Strategies for Optimizing Acute Burn Wound Therapy: A Comprehensive Review
Globally, burns rank as the fourth most frequent cause of accidental injury, following traffic accidents, falls, and drownings, leading to the need for medical care for millions of patients annually. Burn injuries lead to intricate biological effects, triggering a series of complex processes, such as immune and inflammatory responses, metabolic imbalances, and burn-induced coagulopathy. Therapeutic strategies for severely burned patients involve a combination of critical care treatment and targeted wound management, supported by a multidisciplinary team. Despite significant advancements in burn treatment, the complexity of these injuries continues to present challenges in managing both systemic and wound-specific complications . It is crucial to fully grasp the complexities of the various mechanisms driving microvascular dysfunction when deciding upon treatment strategies for burn wounds. The literature typically highlights three key categories: vessel thrombosis caused by vascular damage, increased production of inflammatory mediators, and the presence of pro-apoptotic factors . Moreover, severe burns are characterized by a general systemic response, called systemic inflammatory response syndrome (SIRS), involving the modification of a range of cellular mediators, cytokines, pro-inflammatory molecules, and adipokines. SIRS often leads to multiple organ dysfunction syndrome, requiring a multidisciplinary approach in order to support vital organs and keep biological parameters within acceptable values . This unfavorable evolution can be monitored through the modification of various biochemical parameters, with recent studies suggesting that less commonly used markers, such as plasminogen activator inhibitor-1 (PAI-1) or platelet factor 4 (PF4), might be useful. PAI-1 and PF4 generate a procoagulant effect, with an upward trend in the 48 h and 24 h post-burn, respectively, increasing the already high risk of deep venous thrombosis and pulmonary embolism . Effective management of burn wounds is essential to improving outcomes and minimizing the risk of wound conversion, where initially superficial burns deepen over time. This conversion process is determined by various factors, including delayed treatment, inadequate blood flow, as well as tissue damage from inflammation. Strategies to mitigate this progression focus on early and appropriate interventions that stabilize the wound environment, limit infection, and promote healing. Therapeutic approaches include optimizing wound debridement, applying topical agents to reduce bacterial load, and using advanced technologies such as biologic and skin substitutes to enhance tissue regeneration. Addressing the multifaceted nature of burn injuries requires a comprehensive and individualized treatment plan that integrates these modalities effectively . Burn victims are particularly vulnerable to infections, especially those caused by drug-resistant bacteria. These infections often lead to prolonged hospitalizations, slower wound healing, increased treatment costs, and higher mortality rates. Sepsis and multiorgan failure are the leading causes of death following severe burns, making infection prevention and management a critical aspect of burn care. Among the most frequently encountered bacteria, some are multidrug-resistant, extensively drug-resistant, or even pan-drug-resistant. 
The Infectious Diseases Society of America has identified six bacteria, known as “ESKAPE pathogens” (Enterococcus faecium, Staphylococcus aureus, Klebsiella pneumoniae, Acinetobacter baumannii, Pseudomonas aeruginosa, and Enterobacter spp.), that pose significant therapeutic challenges due to their growing resistance to antibiotics, even those considered last-line defenses. Early excision and autologous skin grafting are considered the gold standard for managing full-thickness and deep partial-thickness burns with expected healing times exceeding 3 weeks, or burns located in functional areas, where pathological scarring represents a significant problem. Wounds that fail to heal within three weeks, such as deep burns, experience prolonged inflammation and delayed re-epithelialization, which disrupt the normal healing process. This leads to excessive collagen production, impaired angiogenesis, and an altered extracellular matrix, all of which contribute to the increased risk of scarring. Thus, this approach is particularly effective because it reduces the risk of infection, promotes wound healing, and minimizes scarring. Debridement is mandatory, since removing burn eschar eliminates a major source of inflammation and a fertile environment for bacterial growth . For burn trauma survivors, the aftermath is often severe. According to a WHO report on child injury prevention, 8% of survivors are left with permanent physical disabilities, with the most frequent long-term effects including hypertrophic scars, keloids, contractures, and, in some cases, the need for amputation. Beyond physical disabilities, burns have a significant emotional impact on survivors. The psychological effects depend not only on the characteristics of the burn but also, and more importantly, on factors like family support, mental health, and individual coping mechanisms. In high-income countries, patients from disadvantaged family situations may face challenges in returning to school or community life. In contrast, in low- and middle-income countries, survivors often struggle with social reintegration, face family abandonment, and encounter difficulties finding employment, which can lead to poverty . The aim of this narrative review is to summarize recent advancements in burn wound diagnosis and treatment modalities, and their potential to enhance patient outcomes. The pathophysiological response of burns at the skin level has different severity grades, with the extent of tissue damage depending on factors such as temperature, exposure duration, and the specific heat of the traumatic agent. In 1953, Dr. Douglas Jackson introduced a model of burn injury that describes three concentric zones of damage, a model which continues to be widely recognized today. The zone of coagulation is the most affected, where all cells are non-viable and the extracellular matrix proteins are denatured due to direct contact with the burn-causing agent. The focus in this zone is on debridement and infection control, as the damaged cells cannot be restored. Surrounding this is the zone of stasis, which contains initially viable tissue that nevertheless suffers from hypoperfusion due to vasoconstriction. While recovery in this zone is possible, it may progress to necrosis within 24–48 h if hypoperfusion persists due to inadequate fluid resuscitation or in the presence of edema or infection. Factors such as advanced age, smoking, diabetes, and other chronic conditions can worsen this progression. 
Given the irreversibility of the coagulation zone, treatment aims to prevent the stasis zone from deteriorating into coagulative necrosis. The outermost zone, known as the zone of hyperemia, consists of viable cells with associated vasodilation due to inflammation and can fully recover within 7–10 days, provided hypoperfusion and infection are avoided . Burns are classified by depth, corresponding to the skin layers involved. First-degree burns (superficial) affect only the epidermis. They are red and painful and heal on their own within 3–4 days without scarring; sunburn is a typical example. Second-degree burns are subdivided into superficial partial-thickness and deep partial-thickness. Superficial partial-thickness (IIA degree) burns affect the epidermis and the papillary dermis and are red and painful, with clear blisters. They heal in about 14 days, with minimal scarring. Deep partial-thickness burns (IIB degree) penetrate deeper into the reticular dermis. They are pale with a spotted appearance, cause discomfort rather than pain, and may take up to 3 weeks to heal, doing so by wound contracture rather than re-epithelialization, which is why surgical intervention is often preferred to prevent scarring and contractures. Third-degree or full-thickness burns affect all skin layers, including the hypodermis. The skin appears white or charred, and there is no pain due to nerve damage. Healing is impossible without skin grafting. Lastly, fourth-degree burns, which are sometimes described in the literature, extend beyond the skin to muscle and bone. They often require extensive debridement, reconstruction, or even amputation . Initial Response in the First 48–72 h The first maneuver in a burn patient, both at the site of the accident and on arrival in the emergency room, is to check the vital signs following the ABCDE guidelines: airway, breathing, circulation, disability, and exposure. Only after thoroughly resolving any issues regarding the patient’s vital signs can the medical team pursue the next steps in the management of the burn patient . All patients should be washed with antiseptic and antimicrobial solutions and undergo initial mechanical debridement in the burn center; some will then require emergency surgery immediately after admission, usually consisting of decompression incisions. That is most frequently the case for circumferential or near-circumferential full-thickness burns, where the formation of the eschar leads to a decrease in skin distensibility, which is exacerbated by the normal edematous reaction of the tissues and, furthermore, by the necessary fluid resuscitation. Thus, the pressure rises inside the soft tissues, leading to compartment syndrome in the extremities, respiratory restriction if the thorax or abdomen is involved, or even raised intraocular pressure in patients with periorbital burns. The medical team should assess the patient and establish the need for immediate surgery as soon as possible, bearing in mind that the clinical signs of compartment syndrome might be delayed, so the indication for escharotomy can be extended prophylactically in patients who need massive fluid resuscitation . In most traumatic injuries, the classic signs of compartment syndrome (pain, pallor, pulselessness, paresthesia, paralysis) indicate the need for a fasciotomy. However, in thermal burns, it is the full-thickness, inelastic eschar that restricts tissue expansion, not the fascia. 
Hence, fasciotomies are relatively unusual for thermal burns, but they must be considered in cases of delayed treatment. On the other hand, fasciotomies are the standard of care in electrical burns, where the damage caused by the high voltage affects the muscles. Early fasciotomies have been shown to decrease the need for limb amputations . However, amputations can be lifesaving for the burn patient, and it has been shown that the earlier the amputation is performed, the lower the level of the amputation. The suggested window for evaluating muscle viability is 3–5 days after injury and, should the muscles be found to be no longer viable, the prior fasciotomy and the following early amputation might spare a major articulation . 3.1. Burn Wound Assessment The depth of the burn wound must be accurately assessed, since its healing potential is directly dependent on the depth of the lesion, which will guide the appropriate therapeutic strategy. 
Clinical examination was for a long time the only method for assessing burn wounds. However, despite its value, clinical examination has limitations, such as subjectivity and dependence on the experience and expertise of the examiner, which has led to the development and use of additional methods and modern technologies. Clinical evaluation is the first pillar of the assessment, but imaging technologies have recently advanced the assessment potential of the burn wound, improving the ability to non-invasively evaluate burns and closely monitor the healing progress . shows the chronology of burn wound treatment and the available and potential technologies that can be used. 3.1.1. Laser Doppler Imaging (LDI) Laser Doppler Imaging (LDI) is an advanced, non-invasive diagnostic tool, introduced by Essex and Byrne in 1991 , used in burn care to assess skin perfusion and determine burn depth. It works by directing a low-power laser beam onto the burn site, where the light is scattered by moving red blood cells in the microcirculation, producing a Doppler shift. This shift is used to generate real-time images that map out blood flow, which is crucial for assessing tissue viability. Studies have shown that LDI can differentiate between superficial partial-thickness burns, which exhibit increased blood flow, and deeper burns, such as full-thickness burns, where blood flow is compromised, as shown in . As a result, LDI is particularly useful for clinicians in making critical decisions regarding the need for excision or grafting, and it offers the potential to predict wound healing (time to complete re-epithelialization) by monitoring perfusion levels during the early stages of burn injury. Additionally, it proves particularly beneficial in cases where clinical judgment may be limited, such as in patients with obscured injuries or those with dark skin tones, where other assessment methods might fall short . However, while LDI shows great promise, its use is not without limitations. For instance, its accuracy may be influenced by the stage of the burn and the underlying condition of the tissue, which could affect the perfusion readings. Furthermore, LDI technology requires significant expertise to interpret the results accurately, and it is not as widely available as other methods. Nonetheless, ongoing improvements in LDI technology and its integration with other diagnostic tools, such as hyperspectral imaging, may help to address these challenges . Mirdell et al. showed that two consecutive measurements, the first in the first 24 h post-injury and the second between 72 and 96 h post-injury, can reach 100% specificity and sensitivity in predicting the prognosis of the burn wound, compared to only 67% for clinical evaluation . Several other studies have shown similar results, with LDI accuracy in the range of 90–100% and clinical accuracy between 50% and 75% . However, the device is costly, bulky, difficult to move, and requires significant time to use. Furthermore, patients must remain still during measurements to prevent artifacts, which can be especially difficult when assessing burn wounds in children . A faster variant, the Laser Doppler Line Scanner (LDLS), was found to be as accurate as classic LDI. Compared to standard LDI, which can scan as much as 2500 cm² in 2 min, LDLS covers 300 cm² in only 4 s, enabling easier use, even in an outpatient setting . 
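In practice, the scanner's output is a per-pixel perfusion map that is then binned into healing-potential categories. The short Python sketch below illustrates this kind of post-processing only in principle; the perfusion values, the array size, and the perfusion-unit cut-offs are placeholders chosen for the example, not validated clinical thresholds or the output of any specific device.

import numpy as np

# Hypothetical LDI output: a small perfusion map in arbitrary perfusion units (PU).
# A real scanner exports a much larger calibrated 2-D array per scan.
perfusion_map = np.array([
    [620.0, 540.0, 310.0],
    [450.0, 180.0,  90.0],
    [700.0, 260.0,  75.0],
])

# Illustrative (not clinically validated) cut-offs separating low,
# intermediate, and high perfusion.
thresholds = [200.0, 500.0]  # PU
labels = np.array([
    "low perfusion (suggests deeper burn)",
    "intermediate perfusion (indeterminate)",
    "high perfusion (suggests superficial burn)",
])

# np.digitize assigns each pixel to one of the three bins.
categories = labels[np.digitize(perfusion_map, thresholds)]

for (row, col), value in np.ndenumerate(perfusion_map):
    print(f"pixel ({row},{col}): {value:5.0f} PU -> {categories[row, col]}")

Published workflows typically map perfusion to a predicted healing time rather than to fixed classes, and any cut-offs are device- and protocol-specific.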
3.1.2. Laser Speckle Imaging (LSI) Laser Speckle Imaging (LSI), first applied to biomedical measurements in the 1980s using analog techniques , assesses burn depth by interpreting blood flow fluctuations in tissue illuminated by a laser. It uses changes in the speckle pattern to generate perfusion maps, correlating blood flow with burn severity. Studies show LSI accurately predicts healing times and demonstrates high sensitivity and specificity, even in varied clinical settings. Compared to Laser Doppler Imaging (LDI), LSI offers faster image capture, higher resolution, and affordability. However, both methods face limitations due to dynamic tissue changes and may not fully reflect actual blood perfusion . 3.1.3. Indocyanine Green (ICG) Fluorescence Imaging Indocyanine Green (ICG) fluorescence imaging is an emerging technique that enhances the visualization of blood flow in burn wounds. Fluorescein fluorescence was first used to assess burn wound depth in 1943 . With a spectral absorption peak around 800 nm, ICG can be detected under near-infrared light, which is used to excite the dye, making areas with good perfusion glow brightly. The dye is injected intravenously, where it binds to albumin, its transporter through the circulatory system. ICG imaging helps assess the viability of burn tissue and can be used intraoperatively to guide surgical debridement and grafting. It provides real-time feedback, allowing surgeons to assess tissue perfusion immediately during surgery . A multicenter, triple-blind study that compared the results of clinical evaluation with those provided by ICG imaging found 100% specificity and sensitivity for the latter . Superficial partial-thickness burns exhibit fast uptake and appear bright and homogeneous, as the vascularization is still intact, while deep partial-thickness burns appear darker, with a mottled fluorescence, and show slower uptake, since the dermal plexus is partially destroyed. Lastly, full-thickness burns will exhibit fluorescence only in large vessels, since the entire dermal plexus is damaged . 3.1.4. Thermography Thermography, first introduced in 1961, uses infrared radiation to create heat maps of the skin, detecting subtle changes in temperature that correspond to blood flow and tissue viability. In burn care, thermography helps assess tissue perfusion, identifying areas of deep burn or ischemia . Static infrared imaging determines burn depth by identifying temperature variations—full-thickness burns appear cooler than healthy skin or superficial burns because blood vessels are damaged in these areas (as shown in ). In contrast, active infrared imaging applies cold stimulation before observation to measure the recovery time to normal temperatures: superficial burns recover more quickly than full-thickness burns. Both active and static imaging are quick, noninvasive methods that provide real-time results within minutes. Their accuracy surpasses that of clinical evaluations, with active imaging achieving an accuracy rate of 83% . Unlike LDI, thermal imagers are affordable, user-friendly, compact, and can easily connect to mobile phones and tablets, capturing images within seconds. These devices can be especially useful in situations with limited consultation time or when LDI is unavailable or impractical . 
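As a rough numerical illustration of the static-thermography principle just described, the sketch below compares the mean temperature of a burned region of interest with a healthy reference area. The temperature values and the temperature-difference cut-off are assumptions made up for the example, not validated diagnostic criteria.

import numpy as np

# Hypothetical infrared readings (°C) for a burned region of interest
# and a contralateral healthy reference area.
burn_roi = np.array([[31.2, 31.0, 30.8],
                     [30.9, 30.5, 30.7]])
healthy_reference = np.array([[34.1, 34.3, 34.0],
                              [34.2, 34.4, 34.1]])

delta_t = healthy_reference.mean() - burn_roi.mean()

# Illustrative cut-off: a region markedly cooler than healthy skin raises
# suspicion of a deep, poorly perfused burn.
SUSPICION_DELTA_T = 2.0  # °C, placeholder value

print(f"Mean temperature difference: {delta_t:.1f} °C")
if delta_t > SUSPICION_DELTA_T:
    print("Markedly cooler than reference -> consistent with a deeper burn (confirm clinically).")
else:
    print("Close to reference temperature -> more consistent with a superficial burn.")

Active thermography adds a time dimension (recovery after cold stimulation), but the comparison against a healthy reference follows the same logic.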
OCT detects changes in the polarization of light reflected from burn tissue, enabling measurements that aid in assessing tissue structure and function, ultimately helping with the evaluation of wound depth. A reduction in collagen birefringence is thought to correlate with increased burn depth. However, OCT cannot reliably assess superficial burns in the epidermis due to its limited resolution, which is insufficient for visualizing individual cells and accurately detecting tissue damage, and it does not effectively measure skin perfusion . OCT-based angiography (OCTA) is an advanced version of OCT that visualizes functional microvascular networks below the tissue surface. Unlike earlier versions of OCT, OCTA can capture and measure functional blood flow, fluid accumulation, and structural details all within a single scan . Photoacoustic imaging (PAI) is a promising biomedical imaging technology, first described by Alexander Graham Bell as a conversion of optical energy to audible pressure waves , with greater penetration depth than optical imaging. With its high contrast and deep penetration, PAI has shown value for imaging deeper tissues. PA signals correlate directly with blood perfusion, which decreases in burned tissue due to blood vessel occlusion. However, this approach only accounts for blood distribution’s effect on PA signal intensity, which does not accurately represent actual blood flow. Additionally, because PAI relies on light absorption, it cannot provide information on light scattering, which is important for assessing the skin’s scattering structure and identifying the extent of the burn wound . Near-infrared spectroscopy is a non-invasive technique used to monitor hemodynamic changes by measuring concentrations of oxyhemoglobin and deoxyhemoglobin in the blood, based on their capacity to reflect light within the near-infrared range (700–900 nm), where most tissues are relatively transparent . NIRS demonstrated a significant increase in oxygen saturation and total hemoglobin in superficial burn wounds compared to control areas, while full-thickness burns exhibited a reduction in oxygenation and total hemoglobin levels. Moreover, local oxygen saturation positively correlates with blood flow, as measured by LDI . Hyperspectral imaging (HSI) is a cutting-edge technology that captures a wide range of spectral data from burn wounds, providing valuable insights into tissue characteristics such as oxygenation, blood perfusion, and hemoglobin levels. The first study describing burn wounds through the use of HIS was published in 2015. By analyzing the spectral signatures of the burn site, HSI allows for precise differentiation between various burn depths, including superficial and deep injuries. Studies have shown that HSI can detect changes in the optical properties of burned tissues, allowing clinicians to assess burn severity and predict healing potential with greater accuracy compared to traditional visual inspection methods . The clinical application of HSI has expanded in recent years, particularly for assessing burns in specific anatomical areas, such as the hands and upper extremities. HSI improves the early identification of areas that may require surgical intervention, such as excision and grafting, by accurately determining the extent of ischemia and tissue damage. In addition to burn severity assessment, HSI has shown potential in predicting wound healing outcomes, offering a significant advantage in treatment planning. 
HSI allows clinicians to assess the viability of burn areas and predict which regions will likely heal naturally versus those that may require more intensive treatments, aiding in the evaluation of healing progression and assisting in the early detection of complications, such as infection or delayed healing . Spatial Frequency Domain Imaging (SFDI) is an emerging, non-invasive optical technique used for assessing burn wounds by analyzing tissue properties such as oxygenation, perfusion, and depth. SFDI uses modulated light to penetrate tissue and measures the variations in the light’s interaction with tissue at different spatial frequencies. This technique offers high spatial and temporal resolution, allowing clinicians to precisely assess burn depth, monitor healing, and identify areas requiring surgical intervention, such as excision or grafting. Recent studies have demonstrated SFDI’s ability to classify burn severity and predict healing outcomes, providing a more objective, real-time assessment compared to traditional visual inspection methods . In addition to its ability to assess burn wounds, SFDI has shown promise for monitoring skin graft healing. By providing detailed images of tissue perfusion and oxygenation, SFDI aids clinicians in evaluating the viability of grafts and detecting early signs of complications like ischemia or infection . Furthermore, the integration of machine learning with SFDI data, as demonstrated in various studies, allows for the development of predictive models that can improve the accuracy of burn classification and guide treatment decisions . In recent years, technological advancements have pushed forward the integration of new technologies into medicine, including in the assessment of burn depth. Multiple studies have successfully demonstrated the use of various machine learning (ML) algorithms that autonomously analyze optical parameters from imaging methods to predict burn depth with accuracies up to 92.5%, which is close to that of histopathological evaluation . As expected, each study uses different imaging techniques, such as ultrasound imaging, OCT, NIRS, or digital photography. The most frequent ML algorithms in these studies are the support vector machine (SVM) and convolutional neural network (CNN) . Additionally, artificial intelligence (AI) systems have been developed to estimate injury depth, achieving accuracy rates of up to 80–95%, which can prove more than useful in settings where there are no specialized burn centers in the proximity. AI could outperform traditional clinical methods in predicting burn depth and healing outcomes. For instance, burn depth predictions using AI algorithms like neural networks were found to improve significantly when dealing with multiclass categorization problems, mapping them onto simpler binary classifications . AI studies showed good efficacy in estimating burn depth even when the data were provided in the form of mobile-captured images with potential background noise, showing strong correlations with LDI-based assessments and providing an accessible and efficient alternative for burn evaluation, especially useful in situations with limited resources . However, there is some variability in AI model performance, particularly related to dataset diversity and quality, which can impact its reliability. Additionally, the integration of AI into real-world clinical practice still requires overcoming hurdles such as clinician trust and regulatory approval . 
AI systems are expected to evolve toward providing more precise, individualized assessments by integrating multiple data sources, such as medical imaging, clinical history, and patient-specific factors, including the integration of AI with real-time, non-invasive diagnostic tools. Future directions will likely focus on improving the ability to predict burn healing trajectories, including determining whether a wound will heal through secondary intention or require surgical intervention (e.g., excision or grafting). These predictions could be made with increasing accuracy, using advanced machine learning algorithms and deeper integration of temporal data to track healing progress over time . 3.2. Burn Wound Debridement 3.2.1. Early Surgical Excision Since the burn eschar is prone to infection and promotes the hypermetabolic state of the patient, the standard of care of burn wounds is early excision, which reduces mortality and hospital length of stay, followed by grafting using autografts . Tangential excision is the most used method for debriding burn wounds. This technique involves gradually removing tissue layers until reaching healthy tissue suitable for grafting, which can be identified either by pinpoint bleeding at the dermal level or by the presence of adequately vascularized adipose tissue. Although it may result in more significant bleeding, tangential excision offers superior aesthetic and functional outcomes compared to fascial excision . Fascial excision involves removing all of the affected tissue down to the fascia, using electrocautery. This technique is employed for full-thickness burns to prevent massive bleeding and to better control infection, if present, despite drawbacks such as lymphedema and a less aesthetic outcome . 3.2.2. Hydrosurgical Excision An alternative approach for debriding burn injuries is the Versajet ® hydrosurgical system, which uses a high-pressure saline jet that acts like a scalpel, based on the Venturi effect, both removing and suctioning necrotic tissue from the wound at the same time . This technique offers superior precision and control compared to traditional excision, and it has been shown to be safe, provide effective debridement, and preserve viable tissues, making it suitable even for sensitive areas such as the hands or face. The Versajet ® system is preferred for partial-thickness burns, as it is less effective for debriding full-thickness burns. However, these same characteristics make it less advantageous for large areas that require rapid debridement . Hydrosurgical excision combined with skin grafting decreases intraoperative blood loss per unit area of grafted skin, improves scar quality one year after the injury, does not raise treatment costs per unit of burned area, and has similar complication rates with normal surgical excision . 3.2.3. Enzymatic Debridement As the name suggests, this debridement technique uses proteolytic enzymes to break down burned and devitalized tissue without affecting healthy tissue. The enzymatic debridement process involves initially cleaning away all non-viable tissue debris, applying saline and chlorhexidine to the burned areas, followed by the enzymatic product for 4 h, after which it is removed, and cleaning the wounds again with saline. Advantages include faster re-epithelialization, reduced healing time, fewer required surgical excisions, decreased need for autografting, and minimized bleeding. 
However, enzymatic debridement can cause local erythema and significantly increased pain, especially during application and removal of the product, which can be counteracted with sufficient analgesia . The European consensus on bromelain-based enzymatic debridement (Nexobrid ® ), derived from the stem of the pineapple plant, indicates that prior application of silver sulfadiazine is not recommended, due to interference with the enzymatic activity. This technique can be used on a maximum burn surface area of 15% TBSA in a single session. For circumferential burns, enzymatic debridement can be applied early on, potentially preventing the need for escharotomies, though not for fasciotomies. Enzymatic debridement is particularly beneficial in patients who are not candidates for surgery due to medical comorbidities or in cases where donor sites are limited . Enzymatic debridement (see ) offers several benefits, including reduced surgical trauma, faster wound bed preparation, and preservation of viable tissue. However, it may not be suitable for all burn wounds, especially those with extensive or deep eschar. In some cases, the use of enzymatic debridement may still need to be followed by surgical intervention if complete debridement is not achieved . Surgical excision, hydrosurgical excision, and enzymatic debridement are distinct approaches to burn wound management, each with specific strengths and limitations. Surgical excision, the traditional gold standard, is particularly effective for full-thickness and extensive burns, offering complete and immediate removal of necrotic tissue. In contrast, hydrosurgical excision provides a more precise and controlled method of debridement, preserving healthy tissue. It is particularly effective for partial-thickness burns and sensitive areas such as the face or hands. However, it is less efficient for large wounds requiring rapid debridement and for full-thickness burns, where surgical excision remains superior. Enzymatic debridement, distinct from the mechanical approaches of surgical and hydrosurgical methods, removes necrotic tissue while sparing healthy tissue. This method is less invasive, reducing surgical trauma and preserving the wound bed. Unlike surgical and hydrosurgical excision, enzymatic debridement does not require specialized equipment or a surgical theater but can be slower in addressing extensive wounds. Pain during application and local erythema are significant drawbacks. Hydrosurgical excision and enzymatic debridement both offer greater precision and tissue preservation compared to surgical excision, but their applications differ. Hydrosurgical systems are ideal for smaller, well-defined wounds and areas requiring detailed debridement, whereas enzymatic debridement excels in non-surgical candidates or where donor sites are limited. Despite their advantages, neither can match the speed and thoroughness of surgical excision for managing extensive or deep burns, where immediate preparation for grafting is essential. Overall, surgical excision provides the most definitive treatment for extensive wounds, while hydrosurgical excision and enzymatic debridement cater to specific needs, emphasizing precision, preservation, and reduced invasiveness. The choice between them depends on burn depth, extent, patient condition, and treatment goals, with their strategic use often enhancing overall outcomes. 3.3. Burn Wound Closure 3.3.1. 
Skin Autografts Skin autografts are divided into split-thickness and full-thickness grafts, depending on the harvested skin layers. The use of autografts becomes more challenging as burn wounds increase in size. Full-thickness grafts, containing both epidermis and dermis, provide superior elasticity, flexibility, and texture, offering better aesthetic results and less conspicuous scarring. However, their viability may be compromised during revascularization because of the thickness of the dermis, and the donor site often requires primary closure due to dermis removal, with limited anatomical donor areas (e.g., supraclavicular, retroauricular, inguinal). Full-thickness grafts are typically used for facial and hand defects . For larger burn areas, split-thickness skin grafts (STSGs), which include the epidermis and part of the dermis, are more common. The donor area can be reused after healing, though repeated harvests may lead to hypopigmentation and increased contracture risk. For small lesions, unexpanded grafts yield better cosmetic outcomes but carry risks of seromas or hematomas . Instruments like manual dermatomes and modern rotary dermatomes are commonly used for harvesting STSGs. Rotary dermatomes are especially helpful in treating large burns because they offer speed and precision, while manual instruments like the Weck guarded Goulian knives and Watson knives are well suited to smaller or delicate areas, providing excellent control for harvesting thin skin layers or removing dead tissue . Expanding grafts is often necessary for large defects. A 2:1 expansion facilitates application and drainage but yields less aesthetically pleasing results due to perforations. Larger expansions, such as 4:1, cover extensive areas, often necessitating a sandwich technique with an overlaid allograft. For critical burn cases, even higher ratios may be used, though these are unsuitable for visible areas like the face and hands, where unexpanded grafts are preferable . The Meek micrografting technique divides a graft into small square fragments, achieving up to 9:1 expansion for coverage of extensive areas (up to 75% of total skin surface), with better viability due to lower metabolic demands and fewer interventions needed. Donor sites should ideally match the recipient area in pigmentation, with common donor areas including the thigh, hip, and gluteal region for easier harvesting and scar concealment . Traditional meshing devices create slits in the skin graft, allowing it to stretch and cover a larger area. However, Meek grafting offers greater uniformity and can achieve higher expansion ratios, making it particularly effective for extensive burns. These advancements in tools and techniques highlight the innovative methods being used in modern burn and wound care . 3.3.2. Temporary Skin Substitutes Allografts, derived from cadaveric skin, are biologic dressings that only temporarily cover the wound before it is ready for closure. They can revascularize when placed on an excised burn wound but are eventually rejected after 3–4 weeks because of their non-self antigens. They serve as temporary coverage without needing immunosuppressants and prevent the desiccation of tissues. Moreover, allografts help the surgeon evaluate the wound's suitability for later autografting . Xenografts, primarily porcine and bovine, provide temporary wound coverage, reduce pain, and protect against infection, though they do not vascularize and only adhere to the wound bed. Thus, they are mainly used for partial-thickness burns and donor sites . 
Recently, newer fish-based xenografts have caught more attention because of their capacity to offer anti-inflammatory benefits and higher porosity, supporting fibroblast activity and resisting bacterial invasion for short periods . The human amniotic membrane (HAM) is a valuable option for temporary wound coverage, especially for burns and trauma on irregular surfaces like the face. Its multi-layered structure consists of an epithelium, which contains pluripotent stem cells, a basal membrane rich in extracellular matrix proteins like collagen and fibronectin, and additional stromal layers containing regenerative molecules. HAM provides essential wound healing benefits, including immunomodulatory effects, anti-scarring, antimicrobial properties, and tissue regeneration. Its flexibility makes it suitable for one-time application, potentially reducing pain and increasing comfort for patients . Despite these advantages, it presents challenges due to storage requirements, immediate availability, and risks of disease transmission. To address these, human acellular amniotic membrane (HAAM) was developed, utilizing infrared and microwave irradiation drying and gamma-ray sterilization, making it easier to store and handle while removing viable cells to minimize immune response . Synthetic skin substitutes (see ) provide sterile, consistent wound coverage, absorbing exudates and preventing infection when carefully monitored, but also maintain an appropriately moist environment for healing. They can be polyurethane-based dressings, silicone-based dressings, hydrocolloid dressings, hydrofiber dressings, alginate dressings or based on other synthetic materials (polylactic acid membranes or polyester mesh impregnated with petroleum jelly, etc.) . 3.3.3. Permanent Skin Substitutes Ideally, the best solution for covering burn wounds would be a product that mimics skin properties while remaining integrated during healing. Most of the available substitutes mainly mimic dermal elasticity and require subsequent split-thickness skin grafting . Acellular Dermal Matrices (ADMs) are scaffolds that promote tissue regeneration by providing a structure for cell growth and integration. Used in both acute and reconstructive burn care, ADMs are valuable for cases where autografts are not feasible, especially in full-thickness burns. Combined with thin split-thickness grafts, ADMs have shown improved functional and cosmetic results, reduced scar contracture, and enhanced graft integration . AlloDerm ® is an allogenic dermis derived from decellularized and de-epithelialized human cadaver skin, by undergoing freeze-drying to eliminate antigenicity and prevent rejection. Since it lacks an epidermal barrier, it must be paired with a thin epithelial autograft when applied on the wound bed . Integra ® consists of a temporary silicone epidermal layer and a bovine collagen–chondroitin matrix for permanent dermal replacement, allowing fibroblast infiltration and revascularization. After 2–3 weeks, once the dermal matrix is integrated, the silicone layer can be replaced with an autograft. The epidermal layer serves as a barrier against infection and dehydration, while the dermal layer promotes the synthesis of a new dermal layer. In severe burns, Integra ® can integrate over poorly vascularized areas like tendons, where skin grafts would not adhere. Histological studies show Integra ® resembles natural skin closely, with better elasticity . 
While Integra ® requires a two-step process, other dermal substitutes allow for a single-step procedure. Matriderm ® , the first to do this, consists of a freeze-dried scaffold of bovine collagen and elastin, applied directly to the wound bed or after wetting with saline. Clinical studies note that graft integration can be slower, likely due to the interposition of the scaffold between the graft and the wound bed, but, on the other hand, scar quality proves to be superior . Novosorb ® Biodegradable Temporizing Matrix is a synthetic, acellular dermal substitute used in the management of burns, trauma wounds, and other complex skin defects. Unlike biologic substitutes like AlloDerm and Integra, BTM is fully synthetic, composed of a porous polyurethane foam matrix topped with a temporary sealing layer. The matrix serves as a scaffold for cellular infiltration and vascularization, promoting dermal regeneration, while the sealing layer protects the wound during the healing process. Once the new dermal layer forms, the sealing layer is removed, and a split-thickness skin graft is applied. BTM offers advantages such as a long shelf life, resistance to immunogenic reactions, and ease of use, making it a valuable tool in modern burn care and reconstructive surgery . An ideal skin replacement method would replace both dermal and epidermal layers simultaneously. A logical approach involves combining epithelial cells with a dermal scaffold in the lab. Several research teams have developed composite skin substitutes, such as Orcel ® , a bilayered substitute made of fibroblasts and keratinocytes, but its allogenic cells do not survive long in the wound bed . Similarly, Apligraf ® is made of cultured fibroblasts in a bovine collagen matrix, topped with cultured keratinocytes. But since these are human foreskin-derived neonatal cells, which are allogenic, they will eventually be rejected, limiting Apligraf ® ’s use to chronic wound healing . Recent studies have also explored full skin substitutes using autologous cells. While promising, these techniques face significant challenges, such as lengthy culture times, high costs, and the potential absence of pigment cells. Regulatory hurdles further complicate their application, limiting their practical use in clinical settings . 3.3.4. Cell-Based Therapies Recent advancements in burn wound management highlight the effectiveness of integrating cellular therapies and dermal substitutes to enhance healing and improve aesthetic outcomes. Studies have underscored the importance of cultured epidermal cells in burn treatment. Cultured epithelial autografts allow for the rapid expansion of a patient’s skin cells, offering a valuable solution for patients with extensive burns. This technique has several disadvantages, such as the fragility of keratinocyte layers, which makes them challenging to handle, high graft failure rates on areas like the lower back and buttocks, extended hospitalization, susceptibility to infection, and higher costs . Since keratinocytes alone do not replace the dermis, additional methods must be associated. Studies show variability in the survival of cultured keratinocyte grafts, with improved outcomes when autografts are combined with cultivated grafts—achieving graft adherence rates between 73% and 96% . Epicel ® , a cultured epidermal autograft (CEA), is produced from human keratinocytes grown on a fibrin mesh scaffold. 
Despite its effectiveness, Epicel ® ’s use is restricted to extensive full-thickness burns covering over 30% of the body surface area due to its high costs, fragile nature, infection susceptibility, and demanding postoperative care requirements. Typically, it is combined with a permanent dermal substitute, although its practicality for dermatological applications is often limited by these same challenges . A more recent alternative is a suspension of autologous keratinocytes, an FDA-approved device (ReCell ® ). This allows for the creation of a solution of the patient’s skin cells, harvested from a small skin biopsy, which is then sprayed over the burn wound. This method promotes epithelialization and accelerates wound closure without the need for extensive skin grafts. A biopsy of up to 4 cm 2 can be turned into enough sprayable product to treat an area of 320 cm 2 . Clinical trials have shown that this approach significantly reduces healing time, improves pigmentation, and decreases scarring compared to traditional skin grafts . Adipose-derived stem cells (ADSCs), which have a similar differentiation capacity to mesenchymal stem cells, applied on top of an ADM significantly accelerate post-burn wound reconstruction. ADSCs promote faster revascularization and cellular integration within the dermal matrix, yielding smoother skin texture and improved elasticity in treated areas, making it a promising approach for complex burn injuries. Adipose-derived stem cells (ADSCs) are particularly promising due to their availability from accessible tissue sources, through minimally invasive liposuction, or directly from the eschar. This ease of access and regenerative cell density makes ADSCs advantageous for wound healing and tissue repair applications . While challenges remain, such as prolonged preparation times, these approaches are becoming integral in managing severe burns, particularly for cases where donor skin is limited. These cellular therapies continue to shape the future of reconstructive burn care by enabling personalized and biologically compatible treatments. 3.3.5. Negative Pressure Wound Therapy Negative pressure wound therapy (NPWT), also known as vacuum-assisted closure (VAC), accelerates wound healing by promoting vascularization, granulation, and faster re-epithelialization, while also reducing bacterial infections and edema in wounds. NPWT has been studied as a dressing for acute burn care, an intermediate treatment before grafting, an adjunct for autografts, a support for dermal substitutes, and a dressing for donor sites . A study by Gümüş et al. demonstrated the efficacy of NPWT in managing high-voltage electrical burns, where traditional treatments often fail—an observation which was supported by subsequent studies from other centers . Studies show that NPWT applied on autografts can lead to graft integration rates of up to 97% . When used with ADMs, NPWT significantly improves outcomes, achieving a 96% success rate in dermal substitute integration with revascularization within one week, compared to the usual 2–3 weeks . showcases a summary of the therapeutic management of burn wounds throughout the three phases of evolution, each with its own specific particularities. It is important not to forget about pain management in this whole process, with drugs such as ibuprofen, acetaminophen or even opioids oftentimes being necessary. 
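Pulling together the expansion figures quoted in the closure subsections above (meshed grafts at 2:1 or 4:1, Meek micrografts at up to 9:1, and a sprayed keratinocyte suspension covering roughly 320 cm² from a 4 cm² biopsy), the donor-site arithmetic can be illustrated with a short calculation; the wound area used here is an arbitrary example, not a recommendation.

# Approximate coverage-to-donor expansion ratios mentioned above.
expansion_ratios = {
    "unexpanded sheet graft": 1,
    "meshed graft 2:1": 2,
    "meshed graft 4:1": 4,
    "Meek micrografts (up to 9:1)": 9,
    "sprayed keratinocyte suspension (~4 cm2 biopsy -> ~320 cm2)": 320 / 4,
}

wound_area_cm2 = 1200  # hypothetical excised area to cover

for technique, ratio in expansion_ratios.items():
    donor_needed = wound_area_cm2 / ratio
    print(f"{technique:60s} donor area needed ~ {donor_needed:6.0f} cm2")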
The depth of the burn wound must be accurately assessed, since its healing potential is directly dependent on the depth of the lesion, which will guide the appropriate therapeutic strategy. Clinical examination was for a long time the only method for assessing burn wounds. However, despite its value, clinical examination has limitations, such as subjectivity and dependence on the experience and expertise of the examiner, which has led to the development and use of additional methods and modern technologies. Clinical evaluation is the first pillar of the assessment, but imaging technologies have recently advanced the assessment potential of the burn wound, improving the ability to non-invasively evaluate burns and closely monitor the healing progress . shows the chronology of burn wound treatment and the available and potential technologies that can be used. 3.1.1. Laser Doppler Imaging (LDI) Laser Doppler Imaging (LDI) is an advanced, non-invasive diagnostic tool, introduced by Essex and Bryne in 1991 , used in burn care to assess skin perfusion and determine burn depth. It works by directing a low-power laser beam onto the burn site, where the light is scattered by moving red blood cells in the microcirculation, producing a Doppler shift. This shift is used to generate real-time images that map out blood flow, which is crucial for assessing tissue viability. Studies have shown that LDI can differentiate between superficial partial-thickness burns, which exhibit increased blood flow, and deeper burns, such as full-thickness burns, where blood flow is compromised, as shown in . As a result, LDI is particularly useful for clinicians in making critical decisions regarding the need for excision or grafting, and it offers the potential to predict wound healing (time to complete re-epithelization) by monitoring perfusion levels during the early stages of burn injury. Additionally, it proves particularly beneficial in cases where clinical judgment may be limited, such as in patients with obscured injuries or those with dark skin tones, where other assessment methods might fall short . However, while LDI shows great promise, its use is not without limitations. For instance, its accuracy may be influenced by the stage of the burn and the underlying condition of the tissue, which could affect the perfusion readings. Furthermore, LDI technology requires significant expertise to interpret the results accurately, and it is not as widely available as other methods. Nonetheless, ongoing improvements in LDI technology and its integration with other diagnostic tools, such as hyperspectral imaging, may help to address these challenges . Mirdell et al. proved that two consecutive measurements, the first one in the first 24 h post-injury and the second one between 72 and 96 h post-injury, can reach 100% specificity and sensibility in predicting the prognosis of the burn wound, compared to only 67% for clinical evaluation . Several other studies have shown similar results, with LDI accuracy in the range of 90–100% and clinical accuracy between 50–75% . However, the device is costly, bulky, difficult to move, and requires significant time to use. Furthermore, patients must remain still during measurements to prevent artifacts, which can be especially difficult when assessing burn wounds in children . A faster LDI, called the Laser Doppler Line Scanner (LDLS), was found to be as accurate as the classic LDI. 
Compared to normal LDI, that can scan as much as 2500 cm 2 in 2 min, LDLS covers 300 cm 2 in only 4 s, enabling easier use, even in an outpatient setting . 3.1.2. Laser Speckle Imaging (LSI) Laser Speckle Imaging (LSI), first used in the 1980s using analog techniques for biomedical use , assesses burn depth by interpreting blood flow fluctuations in tissue illuminated by a laser. It uses changes in the speckle pattern to generate perfusion maps, correlating blood flow with burn severity. Studies show LSI accurately predicts healing times and demonstrates high sensitivity and specificity, even in varied clinical settings. Compared to Laser Doppler Imaging (LDI), LSI offers faster image capture, higher resolution, and affordability. However, both methods face limitations due to dynamic tissue changes and may not fully reflect actual blood perfusion . 3.1.3. Indocyanine Green (ICG) Fluorescence Imaging Indocyanine Green (ICG) fluorescence imaging is an emerging technique that enhances the visualization of blood flow in burn wounds. Fluorescein fluoresce was first used to assess burn wound depth in 1943 . With a spectral absorption of 800 nm, ICG can be detected under near-infrared light, which is used to excite the dye, making areas with good perfusion glow brightly. The dye is injected intravenously, where it binds to its transporter through the circulatory system, albumin. ICG imaging helps assess the viability of burn tissue and can be used intraoperatively to guide surgical debridement and grafting. It provides real-time feedback, allowing surgeons to assess tissue perfusion immediately during surgery . A multicentric, triple-blind study that compared the results of clinical evaluation and those provided by ICG imaging found a 100% specificity and sensibility for the latter . Superficial partial-thickness burns exhibit fast uptake and appear bright and homogenous, as the vascularization is still intact, while deep partial-thickness burns appear darker, with a mottled fluorescence and show slower uptake, since the dermal plexus is partially destroyed. Lastly, full-thickness burns will exhibit fluorescence only in large vessels, since the entire dermal plexus is damaged . 3.1.4. Thermography Thermography, first introduced in 1961, uses infrared radiation to create heat maps of the skin, detecting subtle changes in temperature that correspond to blood flow and tissue viability. In burn care, thermography helps assess tissue perfusion, identifying areas of deep burn or ischemia . Static infrared imaging determines burn depth by identifying temperature variations—full-thickness burns appear cooler than healthy skin or superficial burns because blood vessels are damaged in these areas (as shown in ). In contrast, active infrared imaging applies cold stimulation before observation to measure the recovery time to normal temperatures: superficial burns recover more quickly than full-thickness burns. Both active and static imaging are quick, noninvasive methods that provide real-time results within minutes. Their accuracy surpasses that of clinical evaluations, with active imaging achieving an accuracy rate of 83% . Unlike LDI, thermal imagers are affordable, user-friendly, compact, and can easily connect to mobile phones and tablets, capturing images within seconds. These devices can be especially useful in situations with limited consultation time or when LDI is unavailable or impractical . 3.1.5. 
Other Methods Optical coherence tomography (OCT) has been successfully utilized for characterizing burn injuries. OCT detects changes in the polarization of light reflected from burn tissue, enabling measurements that aid in assessing tissue structure and function, ultimately helping with the evaluation of wound depth. A reduction in collagen birefringence is thought to correlate with increased burn depth. However, OCT cannot reliably assess superficial burns in the epidermis due to its limited resolution, which is insufficient for visualizing individual cells and accurately detecting tissue damage, and it does not effectively measure skin perfusion . OCT-based angiography (OCTA) is an advanced version of OCT that visualizes functional microvascular networks below the tissue surface. Unlike earlier versions of OCT, OCTA can capture and measure functional blood flow, fluid accumulation, and structural details all within a single scan . Photoacoustic imaging (PAI) is a promising biomedical imaging technology, first described by Alexander Graham Bell as a conversion of optical energy to audible pressure waves , with greater penetration depth than optical imaging. With its high contrast and deep penetration, PAI has shown value for imaging deeper tissues. PA signals correlate directly with blood perfusion, which decreases in burned tissue due to blood vessel occlusion. However, this approach only accounts for blood distribution’s effect on PA signal intensity, which does not accurately represent actual blood flow. Additionally, because PAI relies on light absorption, it cannot provide information on light scattering, which is important for assessing the skin’s scattering structure and identifying the extent of the burn wound . Near-infrared spectroscopy is a non-invasive technique used to monitor hemodynamic changes by measuring concentrations of oxyhemoglobin and deoxyhemoglobin in the blood, based on their capacity to reflect light within the near-infrared range (700–900 nm), where most tissues are relatively transparent . NIRS demonstrated a significant increase in oxygen saturation and total hemoglobin in superficial burn wounds compared to control areas, while full-thickness burns exhibited a reduction in oxygenation and total hemoglobin levels. Moreover, local oxygen saturation positively correlates with blood flow, as measured by LDI . Hyperspectral imaging (HSI) is a cutting-edge technology that captures a wide range of spectral data from burn wounds, providing valuable insights into tissue characteristics such as oxygenation, blood perfusion, and hemoglobin levels. The first study describing burn wounds through the use of HIS was published in 2015. By analyzing the spectral signatures of the burn site, HSI allows for precise differentiation between various burn depths, including superficial and deep injuries. Studies have shown that HSI can detect changes in the optical properties of burned tissues, allowing clinicians to assess burn severity and predict healing potential with greater accuracy compared to traditional visual inspection methods . The clinical application of HSI has expanded in recent years, particularly for assessing burns in specific anatomical areas, such as the hands and upper extremities. HSI improves the early identification of areas that may require surgical intervention, such as excision and grafting, by accurately determining the extent of ischemia and tissue damage. 
In addition to burn severity assessment, HSI has shown potential in predicting wound healing outcomes, offering a significant advantage in treatment planning. HSI allows clinicians to assess the viability of burn areas and predict which regions will likely heal naturally versus those that may require more intensive treatments, aiding in the evaluation of healing progression and assisting in the early detection of complications, such as infection or delayed healing .

Spatial Frequency Domain Imaging (SFDI) is an emerging, non-invasive optical technique used for assessing burn wounds by analyzing tissue properties such as oxygenation, perfusion, and depth. SFDI uses modulated light to penetrate tissue and measures the variations in the light’s interaction with tissue at different spatial frequencies. This technique offers high spatial and temporal resolution, allowing clinicians to precisely assess burn depth, monitor healing, and identify areas requiring surgical intervention, such as excision or grafting. Recent studies have demonstrated SFDI’s ability to classify burn severity and predict healing outcomes, providing a more objective, real-time assessment compared to traditional visual inspection methods . In addition to its ability to assess burn wounds, SFDI has shown promise for monitoring skin graft healing. By providing detailed images of tissue perfusion and oxygenation, SFDI aids clinicians in evaluating the viability of grafts and detecting early signs of complications like ischemia or infection . Furthermore, the integration of machine learning with SFDI data, as demonstrated in various studies, allows for the development of predictive models that can improve the accuracy of burn classification and guide treatment decisions .

In recent years, technological advancements have pushed forward the integration of new technologies into medicine, including the assessment of burn depth. Multiple studies have successfully demonstrated the use of various machine learning (ML) algorithms that autonomously analyze optical parameters from imaging methods to predict burn depth with accuracies of up to 92.5%, which is close to that of histopathological evaluation . As expected, each study uses different imaging techniques, such as ultrasound imaging, OCT, NIRS, or digital photography. The most frequent ML algorithms in these studies are the support vector machine (SVM) and the convolutional neural network (CNN) . Additionally, artificial intelligence (AI) systems have been developed to estimate injury depth, achieving accuracy rates of up to 80–95%, which can prove especially useful in settings without a specialized burn center nearby. AI could outperform traditional clinical methods in predicting burn depth and healing outcomes. For instance, burn depth predictions using AI algorithms such as neural networks were found to improve significantly when multiclass categorization problems were mapped onto simpler binary classifications . AI studies showed good efficacy in estimating burn depth even when the data were provided as mobile-captured images with potential background noise, showing strong correlations with LDI-based assessments and providing an accessible and efficient alternative for burn evaluation, especially useful in resource-limited situations . However, there is some variability in AI model performance, particularly related to dataset diversity and quality, which can impact reliability.
Additionally, the integration of AI into real-world clinical practice still requires overcoming hurdles such as clinician trust and regulatory approval . AI systems are expected to evolve toward providing more precise, individualized assessments by integrating multiple data sources, such as medical imaging, clinical history, and patient-specific factors, including the integration of AI with real-time, non-invasive diagnostic tools. Future directions will likely focus on improving the ability to predict burn healing trajectories, including determining whether a wound will heal through secondary intention or require surgical intervention (e.g., excision or grafting). These predictions could be made with increasing accuracy, using advanced machine learning algorithms and deeper integration of temporal data to track healing progress over time .
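To make the ML workflow described above more concrete, the sketch below shows how a support vector machine, one of the algorithms most frequently reported in these studies, could be trained on perfusion-derived features to separate superficial from deep burns. Everything in the example (the feature names, the synthetic data, and the chosen hyperparameters) is hypothetical and intended only to illustrate the general pipeline, not to reproduce any published model.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.metrics import classification_report

rng = np.random.default_rng(0)

# Synthetic, hypothetical feature vectors per wound region:
# [mean perfusion index, perfusion heterogeneity, temperature difference vs. healthy skin]
n = 200
superficial = np.column_stack([
    rng.normal(0.8, 0.1, n),   # higher perfusion
    rng.normal(0.2, 0.05, n),  # more homogeneous
    rng.normal(1.0, 0.5, n),   # warmer relative to healthy skin
])
deep = np.column_stack([
    rng.normal(0.3, 0.1, n),   # reduced perfusion
    rng.normal(0.5, 0.1, n),   # mottled / heterogeneous
    rng.normal(-2.0, 0.5, n),  # cooler than healthy skin
])
X = np.vstack([superficial, deep])
y = np.array([0] * n + [1] * n)  # 0 = superficial, 1 = deep (multiclass depth mapped to binary)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

# Standardize features, then fit an RBF-kernel SVM, mirroring the SVM approach cited above.
model = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
model.fit(X_train, y_train)
print(classification_report(y_test, model.predict(X_test),
                            target_names=["superficial", "deep"]))
```

The binary labeling in the sketch reflects the observation above that performance tends to improve when multiclass depth categories are collapsed into simpler binary decisions; a CNN operating directly on images would follow the same train/evaluate pattern with raw pixels in place of hand-engineered features.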
3.2.1. Early Surgical Excision

Since the burn eschar is prone to infection and promotes the hypermetabolic state of the patient, the standard of care for burn wounds is early excision, which reduces mortality and hospital length of stay, followed by grafting with autografts . Tangential excision is the most widely used method for debriding burn wounds. This technique involves gradually removing tissue layers until reaching healthy tissue suitable for grafting, which can be identified either by pinpoint bleeding at the dermal level or by the presence of adequately vascularized adipose tissue. Although it may result in more significant bleeding, tangential excision offers superior aesthetic and functional outcomes compared to fascial excision . Fascial excision involves removing all of the affected tissue down to the fascia, using electrocautery. This technique is employed for full-thickness burns to prevent massive bleeding and to better control infection, if present, despite drawbacks such as lymphedema and a less aesthetic outcome .

3.2.2. Hydrosurgical Excision

An alternative approach for debriding burn injuries is the Versajet ® hydrosurgical system, which uses a high-pressure saline jet that acts like a scalpel, based on the Venturi effect, removing and suctioning necrotic tissue from the wound at the same time . This technique offers superior precision and control compared to traditional excision, and it has been shown to be safe, provide effective debridement, and preserve viable tissue, making it suitable even for sensitive areas such as the hands or face. The Versajet ® system is preferred for partial-thickness burns, as it is less effective for debriding full-thickness burns. However, these same characteristics make it less advantageous for large areas that require rapid debridement . Hydrosurgical excision combined with skin grafting decreases intraoperative blood loss per unit area of grafted skin, improves scar quality one year after the injury, does not raise treatment costs per unit of burned area, and has complication rates similar to those of conventional surgical excision .

3.2.3. Enzymatic Debridement

As the name suggests, this debridement technique uses proteolytic enzymes to break down burned and devitalized tissue without affecting healthy tissue. The enzymatic debridement process involves first cleaning away all non-viable tissue debris and applying saline and chlorhexidine to the burned areas, followed by the enzymatic product for 4 h, after which the product is removed and the wounds are cleaned again with saline. Advantages include faster re-epithelialization, reduced healing time, fewer required surgical excisions, decreased need for autografting, and minimized bleeding.
However, enzymatic debridement can cause local erythema and significantly increased pain, especially during application and removal of the product, which can be counteracted with sufficient analgesia . The European consensus on bromelain-based enzymatic debridement (Nexobrid ® , derived from the stem of the pineapple plant) indicates that prior application of silver sulfadiazine is not recommended, as it interferes with the enzymatic activity. This technique can be used on a maximum burn surface area of 15% TBSA in a single session. For circumferential burns, enzymatic debridement can be applied early on, potentially preventing the need for escharotomies, though not for fasciotomies. Enzymatic debridement is particularly beneficial in patients who are not candidates for surgery due to medical comorbidities or in cases where donor sites are limited . Enzymatic debridement (see ) offers several benefits, including reduced surgical trauma, faster wound bed preparation, and preservation of viable tissue. However, it may not be suitable for all burn wounds, especially those with extensive or deep eschar. In some cases, enzymatic debridement may still need to be followed by surgical intervention if complete debridement is not achieved .

Surgical excision, hydrosurgical excision, and enzymatic debridement are distinct approaches to burn wound management, each with specific strengths and limitations. Surgical excision, the traditional gold standard, is particularly effective for full-thickness and extensive burns, offering complete and immediate removal of necrotic tissue. In contrast, hydrosurgical excision provides a more precise and controlled method of debridement, preserving healthy tissue. It is particularly effective for partial-thickness burns and sensitive areas such as the face or hands. However, it is less efficient for large wounds requiring rapid debridement and for full-thickness burns, where surgical excision remains superior. Enzymatic debridement, distinct from the mechanical approaches of surgical and hydrosurgical methods, removes necrotic tissue while sparing healthy tissue. This method is less invasive, reducing surgical trauma and preserving the wound bed. Unlike surgical and hydrosurgical excision, enzymatic debridement does not require specialized equipment or a surgical theater, but it can be slower in addressing extensive wounds. Pain during application and local erythema are significant drawbacks.

Hydrosurgical excision and enzymatic debridement both offer greater precision and tissue preservation compared to surgical excision, but their applications differ. Hydrosurgical systems are ideal for smaller, well-defined wounds and areas requiring detailed debridement, whereas enzymatic debridement excels in non-surgical candidates or where donor sites are limited. Despite their advantages, neither can match the speed and thoroughness of surgical excision for managing extensive or deep burns, where immediate preparation for grafting is essential. Overall, surgical excision provides the most definitive treatment for extensive wounds, while hydrosurgical excision and enzymatic debridement cater to specific needs, emphasizing precision, preservation, and reduced invasiveness. The choice between them depends on burn depth, extent, patient condition, and treatment goals, with their strategic use often enhancing overall outcomes.
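The selection logic summarized above can be expressed as a simple, purely illustrative decision sketch. The rules below (the 15% TBSA per-session limit for enzymatic debridement taken from the consensus cited above, and the routing of full-thickness or extensive wounds to surgical excision) are a schematic reading of the comparison in this review, not a validated clinical algorithm, and the class and function names are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class BurnWound:
    depth: str            # "partial" or "full"
    tbsa_percent: float   # burned surface area of this wound
    sensitive_area: bool  # face, hands, etc.
    surgical_candidate: bool

def suggest_debridement(w: BurnWound) -> str:
    """Illustrative mapping of the prose comparison onto rules; not a clinical decision tool."""
    if not w.surgical_candidate and w.tbsa_percent <= 15:
        return "enzymatic debridement (non-surgical candidate, within 15% TBSA per session)"
    if w.depth == "full" or w.tbsa_percent > 15:
        return "surgical excision (tangential or fascial): definitive for deep or extensive wounds"
    if w.sensitive_area:
        return "hydrosurgical excision (precise debridement of partial-thickness, sensitive areas)"
    return "surgical or hydrosurgical excision depending on available resources"

print(suggest_debridement(BurnWound("partial", 4, True, True)))
print(suggest_debridement(BurnWound("full", 25, False, True)))
print(suggest_debridement(BurnWound("partial", 10, False, False)))
```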
3.3.1. Skin Autografts

Skin autografts are divided into split-thickness and full-thickness grafts, depending on the harvested skin layers. The use of autografts becomes more challenging as burn wounds increase in size. Full-thickness grafts, containing both epidermis and dermis, provide superior elasticity, flexibility, and texture, offering better aesthetic results and scar quality. However, their viability may be compromised by the dermal thickness during revascularization, and the donor site often requires primary closure due to dermis removal, with limited anatomical donor areas (e.g., supraclavicular, retroauricular, inguinal). Full-thickness grafts are typically used for facial and hand defects . For larger burn areas, split-thickness skin grafts (STSGs), which include the epidermis and part of the dermis, are more common. The donor area can be reused after healing, though repeated harvests may lead to hypopigmentation and increased contracture risk.
For small lesions, unexpanded grafts yield better cosmetic outcomes but carry risks of seromas or hematomas . Instruments such as manual dermatomes and modern rotary dermatomes are commonly used for harvesting STSGs. Rotary dermatomes are especially helpful in treating large burns because they offer speed and precision, while manual instruments such as the Weck guarded Goulian knife and the Watson knife are well suited to smaller or delicate areas, providing excellent control for harvesting thin skin layers or removing dead tissue .

Expanding grafts is often necessary for large defects. A 2:1 expansion facilitates application and drainage but yields a less aesthetic result due to the perforation pattern. Larger expansions, such as 4:1, cover extensive areas, often necessitating a sandwich technique with an overlaid allograft. For critical burn cases, even higher ratios may be used, though these are unsuitable for visible areas like the face and hands, where unexpanded grafts are preferable . The Meek micrografting technique divides a graft into small square fragments, achieving up to 9:1 expansion for coverage of extensive areas (up to 75% of total skin surface), with better viability due to lower metabolic demands and fewer interventions needed (a simple coverage calculation along these lines is sketched at the end of this section). Donor sites should ideally match the recipient area in pigmentation, with common donor areas including the thigh, hip, and gluteal region for easier harvesting and scar concealment . Traditional meshing devices create slits in the skin graft, allowing it to stretch and cover a larger area. However, Meek grafting offers greater uniformity and can achieve higher expansion ratios, making it particularly effective for extensive burns. These advancements in tools and techniques highlight the innovative methods being used in modern burn and wound care .

3.3.2. Temporary Skin Substitutes

Allografts, derived from cadaveric skin, are biologic dressings that only temporarily cover the wound until it is ready for closure. They can revascularize when placed on an excised burn wound but are eventually rejected after 3–4 weeks because their antigens are recognized as non-self. They serve as temporary coverage without the need for immunosuppressants and prevent tissue desiccation. Moreover, allografts help the surgeon evaluate the wound’s suitability for later autografting . Xenografts, primarily porcine and bovine, provide temporary wound coverage, reduce pain, and protect against infection, though they do not vascularize but only adhere to the wound bed. Thus, they are mainly used for partial-thickness burns and donor sites . Recently, newer fish-based xenografts have attracted more attention because of their anti-inflammatory benefits and higher porosity, supporting fibroblast activity and resisting bacterial invasion for short periods .

The human amniotic membrane (HAM) is a valuable option for temporary wound coverage, especially for burns and trauma on irregular surfaces like the face. Its multi-layered structure consists of an epithelium, which contains pluripotent stem cells, a basal membrane rich in extracellular matrix proteins like collagen and fibronectin, and additional stromal layers containing regenerative molecules. HAM provides essential wound healing benefits, including immunomodulatory, anti-scarring, and antimicrobial properties, as well as support for tissue regeneration. Its flexibility makes it suitable for one-time application, potentially reducing pain and increasing comfort for patients . Despite these advantages, it presents challenges related to storage requirements, immediate availability, and the risk of disease transmission.
Despite these advantages, it presents challenges due to storage requirements, immediate availability, and risks of disease transmission. To address these, human acellular amniotic membrane (HAAM) was developed, utilizing infrared and microwave irradiation drying and gamma-ray sterilization, making it easier to store and handle while removing viable cells to minimize immune response . Synthetic skin substitutes (see ) provide sterile, consistent wound coverage, absorbing exudates and preventing infection when carefully monitored, but also maintain an appropriately moist environment for healing. They can be polyurethane-based dressings, silicone-based dressings, hydrocolloid dressings, hydrofiber dressings, alginate dressings or based on other synthetic materials (polylactic acid membranes or polyester mesh impregnated with petroleum jelly, etc.) . 3.3.3. Permanent Skin Substitutes Ideally, the best solution for covering burn wounds would be a product that mimics skin properties while remaining integrated during healing. Most of the available substitutes mainly mimic dermal elasticity and require subsequent split-thickness skin grafting . Acellular Dermal Matrices (ADMs) are scaffolds that promote tissue regeneration by providing a structure for cell growth and integration. Used in both acute and reconstructive burn care, ADMs are valuable for cases where autografts are not feasible, especially in full-thickness burns. Combined with thin split-thickness grafts, ADMs have shown improved functional and cosmetic results, reduced scar contracture, and enhanced graft integration . AlloDerm ® is an allogenic dermis derived from decellularized and de-epithelialized human cadaver skin, by undergoing freeze-drying to eliminate antigenicity and prevent rejection. Since it lacks an epidermal barrier, it must be paired with a thin epithelial autograft when applied on the wound bed . Integra ® consists of a temporary silicone epidermal layer and a bovine collagen–chondroitin matrix for permanent dermal replacement, allowing fibroblast infiltration and revascularization. After 2–3 weeks, once the dermal matrix is integrated, the silicone layer can be replaced with an autograft. The epidermal layer serves as a barrier against infection and dehydration, while the dermal layer promotes the synthesis of a new dermal layer. In severe burns, Integra ® can integrate over poorly vascularized areas like tendons, where skin grafts would not adhere. Histological studies show Integra ® resembles natural skin closely, with better elasticity . While Integra ® requires a two-step process, other dermal substitutes allow for a single-step procedure. Matriderm ® , the first to do this, consists of a freeze-dried scaffold of bovine collagen and elastin, applied directly to the wound bed or after wetting with saline. Clinical studies note that graft integration can be slower, likely due to the interposition of the scaffold between the graft and the wound bed, but, on the other hand, scar quality proves to be superior . Novosorb ® Biodegradable Temporizing Matrix is a synthetic, acellular dermal substitute used in the management of burns, trauma wounds, and other complex skin defects. Unlike biologic substitutes like AlloDerm and Integra, BTM is fully synthetic, composed of a porous polyurethane foam matrix topped with a temporary sealing layer. The matrix serves as a scaffold for cellular infiltration and vascularization, promoting dermal regeneration, while the sealing layer protects the wound during the healing process. 
Once the new dermal layer forms, the sealing layer is removed, and a split-thickness skin graft is applied. BTM offers advantages such as a long shelf life, resistance to immunogenic reactions, and ease of use, making it a valuable tool in modern burn care and reconstructive surgery .

An ideal skin replacement method would replace both dermal and epidermal layers simultaneously. A logical approach involves combining epithelial cells with a dermal scaffold in the laboratory. Several research teams have developed composite skin substitutes, such as Orcel ® , a bilayered substitute made of fibroblasts and keratinocytes, but its allogenic cells do not survive long in the wound bed . Similarly, Apligraf ® is made of cultured fibroblasts in a bovine collagen matrix, topped with cultured keratinocytes. However, since these are allogenic neonatal cells derived from human foreskin, they are eventually rejected, limiting Apligraf ® ’s use to chronic wound healing . Recent studies have also explored full skin substitutes using autologous cells. While promising, these techniques face significant challenges, such as lengthy culture times, high costs, and the potential absence of pigment cells. Regulatory hurdles further complicate their application, limiting their practical use in clinical settings .

3.3.4. Cell-Based Therapies

Recent advancements in burn wound management highlight the effectiveness of integrating cellular therapies and dermal substitutes to enhance healing and improve aesthetic outcomes. Studies have underscored the importance of cultured epidermal cells in burn treatment. Cultured epithelial autografts allow for the rapid expansion of a patient’s skin cells, offering a valuable solution for patients with extensive burns. This technique has several disadvantages, such as the fragility of keratinocyte layers, which makes them challenging to handle, high graft failure rates on areas like the lower back and buttocks, extended hospitalization, susceptibility to infection, and higher costs . Since keratinocytes alone do not replace the dermis, additional methods must be combined with them. Studies show variability in the survival of cultured keratinocyte grafts, with improved outcomes when autografts are combined with cultivated grafts—achieving graft adherence rates between 73% and 96% .

Epicel ® , a cultured epidermal autograft (CEA), is produced from human keratinocytes grown on a fibrin mesh scaffold. Despite its effectiveness, Epicel ® ’s use is restricted to extensive full-thickness burns covering over 30% of the body surface area due to its high cost, fragile nature, infection susceptibility, and demanding postoperative care requirements. Typically, it is combined with a permanent dermal substitute, although its practicality for dermatological applications is often limited by these same challenges . A more recent alternative is a suspension of autologous keratinocytes delivered with an FDA-approved device (ReCell ® ). This allows for the creation of a solution of the patient’s skin cells, harvested from a small skin biopsy, which is then sprayed over the burn wound. This method promotes epithelialization and accelerates wound closure without the need for extensive skin grafts. A biopsy of up to 4 cm² can be turned into enough sprayable product to treat an area of 320 cm² . Clinical trials have shown that this approach significantly reduces healing time, improves pigmentation, and decreases scarring compared to traditional skin grafts .
Adipose-derived stem cells (ADSCs), which have a differentiation capacity similar to that of mesenchymal stem cells, applied on top of an ADM significantly accelerate post-burn wound reconstruction. ADSCs promote faster revascularization and cellular integration within the dermal matrix, yielding smoother skin texture and improved elasticity in treated areas, making them a promising approach for complex burn injuries. ADSCs are particularly attractive due to their availability from accessible tissue sources, through minimally invasive liposuction, or directly from the eschar. This ease of access and regenerative cell density makes ADSCs advantageous for wound healing and tissue repair applications . While challenges remain, such as prolonged preparation times, these approaches are becoming integral in managing severe burns, particularly for cases where donor skin is limited. These cellular therapies continue to shape the future of reconstructive burn care by enabling personalized and biologically compatible treatments.

3.3.5. Negative Pressure Wound Therapy

Negative pressure wound therapy (NPWT), also known as vacuum-assisted closure (VAC), accelerates wound healing by promoting vascularization, granulation, and faster re-epithelialization, while also reducing bacterial infections and edema in wounds. NPWT has been studied as a dressing for acute burn care, an intermediate treatment before grafting, an adjunct for autografts, a support for dermal substitutes, and a dressing for donor sites . A study by Gümüş et al. demonstrated the efficacy of NPWT in managing high-voltage electrical burns, where traditional treatments often fail—an observation supported by subsequent studies from other centers . Studies show that NPWT applied over autografts can lead to graft integration rates of up to 97% . When used with ADMs, NPWT significantly improves outcomes, achieving a 96% success rate in dermal substitute integration, with revascularization within one week, compared to the usual 2–3 weeks .

 showcases a summary of the therapeutic management of burn wounds throughout the three phases of evolution, each with its own specific particularities. Pain management must not be overlooked throughout this process, with drugs such as ibuprofen, acetaminophen, or even opioids often being necessary.
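The expansion figures quoted earlier in this section (mesh ratios of 2:1 to 4:1, Meek expansion of up to 9:1, and roughly 320 cm² of sprayable coverage from a 4 cm² ReCell ® biopsy) lend themselves to a simple back-of-the-envelope coverage calculation. The sketch below is purely illustrative arithmetic based on those published ratios; the donor-site area is a hypothetical input and the output ignores practical take rates, so it should not be read as a planning recommendation.

```python
# Rough donor-site coverage arithmetic based on the expansion ratios cited above.
# The donor area and the list of techniques are illustrative assumptions only.

EXPANSION_RATIOS = {
    "unexpanded sheet graft": 1.0,
    "2:1 meshed graft": 2.0,
    "4:1 meshed graft": 4.0,
    "Meek micrografting (up to)": 9.0,
    "cell suspension (~4 cm2 biopsy -> ~320 cm2)": 80.0,
}

def coverage_cm2(donor_area_cm2: float, ratio: float) -> float:
    """Theoretical wound area that a given donor harvest could cover."""
    return donor_area_cm2 * ratio

donor_area = 240.0  # hypothetical harvest, e.g. part of an anterior thigh, in cm2
for technique, ratio in EXPANSION_RATIOS.items():
    print(f"{technique:<45} {coverage_cm2(donor_area, ratio):>8.0f} cm2")
```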
Recent advancements in therapeutic approaches have significantly expanded the possibilities for reconstructive surgery, particularly for patients with complex soft tissue injuries or defects. Tissue bioengineering has emerged as a groundbreaking field, enabling the creation of customized solutions for regenerating and restoring soft tissues. Furthermore, the development of vascularized composite allotransplantation (VCA) represents a major leap forward, allowing for the transplantation of composite tissues—such as skin, muscle, bone, and nerves—in a single, integrated graft. This approach offers patients the possibility of full-scale tissue restoration, even in the most intricate and challenging cases. Moreover, the incorporation of nanotechnology into reconstructive medicine holds immense promise, with nanomaterials and nanoparticles being explored for their ability to enhance wound healing, improve tissue regeneration, and provide targeted drug delivery. Introducing these cutting-edge strategies into clinical practice is essential for providing comprehensive care that optimizes outcomes in terms of structural integrity, function, and aesthetics. By leveraging these innovations, healthcare providers can deliver more effective treatments, which not only restore patients’ physical appearance but also significantly enhance their quality of life.
The ultimate goal is to help patients regain their pre-injury functionality as quickly as possible, supporting both their social reintegration and professional rehabilitation. These advancements mark a shift toward more personalized and precise reconstructive care, enabling patients to not only recover their health but also regain their confidence, independence, and sense of self . 4.1. Bioprinting Technologies Current skin substitutes have certain disadvantages, among which is the incapacity to stimulate the regeneration of vessels, nerves, sweat glands and sebaceous glands, hair follicles, or pigmentation. Bioprinting can innovate the field by generating complex skin constructs that include all natural skin structures, significantly improving the outcome of burn patients . Bioprinting technology represents a groundbreaking advancement in burn wound therapy. This technique involves the layer-by-layer deposition of living cells with hydrogel-based scaffolds (“bioink”) to create complex tissues, such as skin, vascular grafts, heart tissue, or cartilaginous structures. Bioprinting allows for the fabrication of skin grafts tailored to specific defects, which may reduce dependence on donor sites and enhance graft integration . Tridimensional bioprinting consists of five steps: scanning the target tissue, developing a model based on the imaging data using computer-aided design and manufacturing software, selecting appropriate biomaterial scaffolds and cell types, printing the tissue using a bioprinter, and, lastly, allowing the printed tissue to mature . Bioprinting has several advantages for developing skin constructs over conventional tissue engineering methods, such as automation, standardization for clinical scenarios, and precise cell placement. While traditional tissue engineering approaches, such as culturing cells on a scaffold and maturing them in a bioreactor, have comparable results to bioprinting, there are still challenges to address in producing skin constructs. Notably, conventional methods often involve lengthy production times, especially when needed for extensive burns . There are two main bioprinting approaches: in situ and in vitro. In situ bioprinting involves the precise deposition of cells on the wound bed, enabling skin maturation directly at the wound site, without the need for the costly and time-consuming in vitro differentiation or multiple surgeries . On the other hand, in vitro bioprinting involves skin maturation in a bioreactor and then transplanting the construct onto the wound site. Studies have shown the effectiveness of both methods, with in situ bioprinting showing promising results in wound healing and skin regeneration, including successful trials in animal models . Recent studies have shown that bioprinted skin constructs can replicate the multi-layered structure of human skin, including both dermal and epidermal components. The epidermis is thin and can potentially be bioprinted using laser-assisted technology to replicate its morphology and pigmentation. The basement membrane is composed of fibrous tissue and ECM components, but bioprinting it is challenging, so researchers often rely on tissue self-assembly. The dermis, found beneath the basement membrane, contains fibroblasts, ECM, skin appendages, blood vessels, and nerves. This layer’s elasticity and strength come from collagen and elastin fibers. Extrusion-based bioprinting could be effective for this layer, allowing the inclusion of various cell types like hair follicles and glands. 
The hypodermis, below the dermis, consists of adipose tissue that provides insulation and padding, and restoring it through autologous fat injection has shown benefits in burn recovery by improving scar pliability. This technique has the potential to produce skin grafts with improved vascularization, reducing the likelihood of graft failure. Moreover, there are studies looking into the possibility of incorporating other structures into the skin constructs, such as melanocytes, hair follicles, endothelial cells, sweat glands, and sebaceous glands . To advance the clinical use of bioprinting for skin, several technological challenges must be addressed. First, large quantities of cells are needed for transplant-ready skin, and current cell expansion methods only support millions, not billions of cells, necessitating innovation in cell expansion technologies. Additionally, improvements in bioinks are required to ensure consistent bioprinting with appropriate biomechanical properties. Enhanced printing resolution is crucial to replicate the microarchitecture of skin, and control over the microarchitecture will be vital for functional tissue. Increasing printing speed without compromising cell viability is another challenge. Building a functional vasculature is also essential for tissue perfusion, and methods using sacrificial inks or simultaneous printing of vasculature and surrounding cells show promise. Additionally, the development of standard growth media, ECM-based bioinks, and dynamic bioreactors will improve tissue maturation. Future efforts should focus on refining computational and analytical approaches to model and optimize bioprinting parameters and tissue development . 4.2. Wireless Microcurrent Stimulation Wireless Microcurrent Stimulation (WMCS) is an emerging technique that uses electrical currents to stimulate cell activity, including DNA synthesis, cell migration, and collagen production. This non-invasive therapy, which delivers a current to the burn wound without direct contact, generates a significant number of oxygen molecules, which can emit electrons at the time of contact with the wound . This repetitive interaction generates a constant low-intensity electric current, improving blood flow in burn wounds without any systemic adverse effects or impairing wound healing. These electric currents mimic the natural current that the body generates for tissue repair . WMCS enhances wound healing, especially by reducing the zone of stasis, and reduces pain in burn patients . An important aspect that needs to be considered is the compatibility of this technique with specific patient characteristics, since those with cardiac pacemakers or metal implants cannot be treated with WMCS. A study comparing NPWT and WMCS proved that WMCS has superior results when considering the reduction of the burn wound area, while NPWT is better for the reduction of bacterial growth, although other studies suggest that WMCS has bacteriostatic effects as well . 4.3. Extracorporeal Shock Wave Therapy Extracorporeal Shock Wave Therapy (ESWT) has been explored as a novel approach to enhance burn wound healing, due to its safety, efficiency, non-invasiveness, and lower costs. It uses electromagnetic shock wave energy with several biologic effects, such as increased cellular activity and tissular regeneration, increased blood flow, and reduced inflammation . Studies show that ESWT may increase wound perfusion and reduce the need for grafting in deep partial-thickness and full-thickness burns . 
When used for superficial partial-thickness burns, ESWT has shown significantly faster re-epithelization times after debridement or topical therapy, in a prospective randomized phase II trial . Similarly, the donor site for the split-thickness partial graft can be managed with a single round of ESWT immediately after harvest, significantly accelerating donor site healing . Several studies investigated the effects of ESWT on pain, pruritus, quality of life, burn scars, and their microbiome, with significantly reduced scar pain, positive influences on the skin’s supportive microbiome, and improved healing outcomes in burn patients . 4.4. Nanotechnology and Nanomedicine Nanotechnology is rapidly advancing in the field of burn wound care, offering new methods for preventing infection and promoting healing . Nanoparticles are being investigated not only for regeneration and repair, but also for their ability to deliver therapeutic agents directly to the wound site, improving the efficacy of treatment while minimizing systemic side effects. Early research indicates that nanotechnology can enhance collagen deposition, hair follicle regeneration, and overall wound healing . Organic nanostructures include polymeric nanoparticles, nanoemulsions, nanogels, liposomes, solid lipid-based nanoparticles, etc. On the other hand, nanostructures can also be inorganic, based on nanocarbons (such as carbon nanotubes, graphene, or nanodiamonds), gold, copper, silver, titanium dioxide, magnetic nanoparticles, or quantum dots . These act as delivery methods for different agents that need to be locally administered on the wound, such as antibiotics, growth factors, specific genes, or even stem cells, preventing wound infection whilst stimulating tissue regeneration . Nanotechnology can also be used to create nanoengineered scaffolds based on nanoparticles and nanofibers in order to provide enhanced wound protection, achieve lower wound infections, and accelerate healing through increased cell attachment . However, the potential toxicity of metal-based nanoparticles remains a significant concern. Studies indicate that certain nanoparticles, such as silver and titanium dioxide, may generate reactive oxygen species or accumulate in tissues, causing cytotoxicity, oxidative stress, or inflammatory responses. These effects can hinder wound healing and pose risks for long-term systemic exposure. Addressing this challenge requires optimizing nanoparticle formulations to balance therapeutic benefits with biocompatibility. Strategies such as surface functionalization or incorporating biodegradable coatings are under investigation to mitigate toxicity while maintaining efficacy . 4.5. Vascularized Composite Allotransplantation (VCA) Vascularized composite allografts have been carried out for life-enhancing indications in a carefully chosen group of patients, in accordance with specific institutional protocols. After two decades of clinical research on vascularized composite grafts, the primary limitation remains the need for lifelong immunosuppression, even with standardized indications and surgical protocols, for what is ultimately a non-vital procedure. This prolonged immunosuppressive treatment is associated with a range of adverse reactions. At present, the focus of composite tissue transplant research lies in the field of immunology . VCA has been successfully used for burn reconstruction since 2003, providing a solution for patients with severe deformities. 
Traditional reconstructive methods following extensive facial burns often result in suboptimal functional and aesthetic outcomes. The reconstruction of burn injuries affecting central facial units or uniquely intricate structures, such as the eyelids, remains particularly challenging . However, its safety in this population has been a concern due to the high levels of alloimmunization resulting from tissue exposure during burn care. Pre-VCA factors, such as multiple transfusions, prior transplants, pregnancies, the use of skin allografts, and mechanical support devices, can lead to sensitization. This may ultimately exclude burn patients—those who could benefit the most from VCA—from receiving hand or face transplants . Vascularized composite allotransplantation (VCA) for burn reconstruction has been associated with an increased risk of 1-year mortality and an increased number of episodes of acute rejection compared to other VCA-candidate patients. Future research should aim to identify the unique risk factors specific to burn patients undergoing VCA and explore the relationship between antigenic burdens and surgical outcomes. Although burn patients currently represent a minority of VCA recipients, they constitute the largest demographic on waiting lists due to the high burden of donor-specific antibodies, which complicates donor matching . The current goal is to reduce immunosuppression to a level where both the functionality of the allograft is preserved, and adverse immune reactions are minimized. The long-term objective is to achieve donor-specific immune tolerance, where the immune system no longer reacts to the donor's antigens, eliminating the need for ongoing immunosuppressant therapy .
Burn wound therapy is experiencing a major transformation with the integration of advanced technologies and novel therapeutic strategies. Cutting-edge techniques are enhancing patient outcomes by promoting faster and more effective wound healing, lowering infection risks, and minimizing scar formation. While these innovations show substantial promise, several challenges remain in their implementation within clinical settings.
These include the need for standardized protocols, high costs, and the requirement for specialized equipment and training. Moreover, the variability in patient conditions, wound types, and response to treatment further complicates the widespread adoption of these methods. Continued research and clinical trials are crucial to further refine these methods, optimize their effectiveness, and make them more accessible to a broader range of patients. As these technologies advance, the future of burn care could bring dramatic improvements in both the functional recovery and aesthetic appearance of healed burn wounds, ultimately enhancing the quality of life for patients.
Using multi-omics to explore the effect of
As an abiotic stress, nutritional stress seriously influences plant growth and yield. Currently, the loss, imbalance, and shortage of soil nutrients caused by unsustainable farming systems are becoming increasingly extensive and are not conducive to improving crop yield. Plant growth and development require adequate and balanced mineral nutrition. There are a variety of mineral nutrients in soil, which affect several cellular and metabolic processes (Gupta et al. ). In addition, a deficiency in mineral elements has seriously restricted the growth of crops around the world (Haefele et al. ). Macroelements and microelements have positive effects on plant growth and development. Deficiencies in nitrogen (N) and phosphorus (P) are considered to be the most serious causes of loss of agricultural production globally (Rose et al. ). Nitrogen deficiency can lead to decreased chlorophyll content, inhibited photosynthesis, and reduced dry matter accumulation in plants (Du et al. ). Phosphorus deficiency reduces photosynthetic capacity and darkens plant leaves (Sun et al. ). Nutrient deficiency is a complex problem, and plants often lack not just one element but multiple elements. Therefore, to resolve nutrient deficiency, the use of chemical fertilizers has increased dramatically, leading to soil health and environmental problems. A growing number of studies have revealed the potential of microorganisms in agriculture (Liu et al. ). Plant growth-promoting rhizosphere bacteria (PGPRs) are considered an effective means of maintaining agricultural productivity and promoting the sustainable development of agriculture. A previous study showed that PGPRs play an important role in improving plant resistance to abiotic stress (Gupta et al. ), for example by inducing the antioxidant system and increasing the activity of antioxidant enzymes and the accumulation of osmotic regulatory substances (El-Esawi et al. ). Furthermore, PGPRs have the ability to maintain water balance, solubilize phosphorus and potassium, fix nitrogen, and chelate iron in plants (Santoyo et al. ). In addition, PGPRs play a key role in regulating plant rhizosphere microbial communities. Recently, studies have shown that PGPRs can alter the soil rhizosphere microbial community. For example, inoculation with two PGPR strains altered the microbial community of the rhizosphere soil of Taxus chinensis var. mairei (Bai et al. ). Moreover, studies have shown that several PGPRs can promote plant growth and improve plant tolerance to stress through multiple complex mechanisms (Shameer and Prasad ). Metagenomic analysis has enabled a deeper understanding of the taxonomic and functional diversity of soil microbial communities (Du et al. ). Metagenomic sequencing has been used to identify microorganisms that are differentially abundant in PGPR-inoculated soils compared with non-inoculated soils. Metabonomics provides detailed metabolite profiles and can identify the metabolic pathways of differentially accumulated metabolites, which helps to reveal the mechanism of interaction between microbial communities and the environment (Liu et al. ). Previously, numerous nontargeted metabolomics studies have been conducted in crops to explain the mechanisms of tolerance to different stresses (Zhao et al. ). Currently, our understanding of the changes in the rhizosphere microbial community and rhizosphere metabolites of plants inoculated with PGPRs under nutrient deficiency is very limited.
Therefore, to further understand the mechanism by which PGPRs improve plant resistance, metagenomic and metabolomic analyses are needed in addition to the determination of physiological and biochemical indices. In this experiment, lettuce was used as the research material, and the PGPR strain Bacillus velezensis SAAS-63 was used as the inoculant. Different concentrations of nutrient solution were used to establish nutrient-deficient and nutrient-sufficient conditions for pot experiments. The objectives of this study were (1) to explore the effects of this strain on lettuce growth under nutritional stress, (2) to measure the capacity of this strain to enhance reactive oxygen species (ROS) scavenging and osmotic regulation in lettuce under nutritional stress, (3) to explore the effects of strain SAAS-63 inoculation on nutrient accumulation in lettuce, and (4) to reveal the mechanism by which B. velezensis SAAS-63 helps plants resist nutritional stress from multiple perspectives. Biological material and plant materials Strain SAAS-63 is a PGPR isolated and screened from the plant rhizosphere. It was identified as B. velezensis . Strain SAAS-63 is preserved at the Guangdong Microbial Culture Collection Center under preservation number GDMCC No. 63201. For inoculation, B. velezensis SAAS-63 was cultured in liquid Luria–Bertani (LB) medium. To avoid the influence of irrelevant factors such as the LB medium on the experimental results, the cultured bacterial solution was centrifuged and the cells were resuspended in sterile water to prepare the bacterial suspension. The lettuce ( Lactuca sativa Linn) seeds used in this experiment were obtained from the Biotechnology Research Institute, Shanghai Academy of Agricultural Sciences, China. The seeds were surface-sterilized and washed with sterile distilled water. The seeds were germinated and the seedlings were grown to the three-leaf stage, then transplanted into pots for culture. The controlled conditions of the artificial climate chamber were as follows: temperature of 25 °C, relative humidity of 60%, and a 14 h/10 h light/dark cycle. Nutrient stress treatments and experimental design Seedlings were transplanted into pots containing vermiculite, which was saturated with complete nutrient solution for the nutrient-sufficient treatment and with water for the nutrient-deficient treatment. The nutrient solution was prepared according to Bisht's method (Bisht et al. ). The original soil had pH 6.81, nitrogen 38.4 mg/kg, phosphorus 1.6 mg/kg, potassium 45.1 mg/kg, calcium 178.5 mg/kg, manganese 1.8 mg/kg, and zinc 0.4 mg/kg, all of which were at the deficiency level. The experiment was based on a completely randomized design and was carried out with 10 biological replicates. The two experimental factors were as follows: (1) the first factor consisted of two nutrient conditions, namely, (i) nutrient-deficient and (ii) nutrient-sufficient; (2) the second factor consisted of two inoculation treatments, namely, (i) inoculated with B. velezensis SAAS-63 and (ii) not inoculated with B. velezensis SAAS-63. Treatments began when the lettuce reached the three-leaf stage: the nutrient treatment (50 mL) and the bacterial treatment (50 mL) were applied every week for 4 weeks, and the bacterial treatment began three days after the nutrient treatment. Samples were collected 5 days after the last inoculation.
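For readers who want the factorial layout above in a reproducible form, the following base-R sketch builds the treatment table and the group codes used later for the rhizosphere soil samples (CKN, SAAS-63N, CKS, SAAS-63S). It is only an illustration of the stated design; the object and column names are arbitrary.

```r
# Illustrative layout of the 2 x 2 factorial design described above:
# nutrient condition x inoculation, 10 biological replicates per combination.
design <- expand.grid(
  replicate   = 1:10,
  nutrient    = c("deficient", "sufficient"),
  inoculation = c("non-inoculated", "SAAS-63")
)

# Group codes matching those later used for the rhizosphere soil samples:
# CKN and SAAS-63N (nutrient-deficient), CKS and SAAS-63S (nutrient-sufficient).
design$group <- with(design, paste0(
  ifelse(inoculation == "SAAS-63", "SAAS-63", "CK"),
  ifelse(nutrient == "deficient", "N", "S")
))

table(design$group)  # 10 pots per treatment combination
head(design)
```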
Morpho-physiological parameters and biochemical index measurement The samples were harvested 30 days after the first treatment. The fresh weight, leaf circumference, shoot length, and other main traits of the lettuce were determined at the end of the experiment. Ten plants were randomly selected for each treatment. The total soluble sugars (SS) in the plants were determined following the anthrone method of Fales . The absorbance was measured at 630 nm. The activities of antioxidant enzymes, including superoxide dismutase (SOD), peroxidase (POD), and catalase (CAT), were measured using assay kits (Solarbio, Beijing, China). The proline (Pro) and malondialdehyde (MDA) contents were also determined using assay kits (Solarbio, Beijing, China). All measurements were performed according to the manufacturer's instructions. Plant nutrient analysis Total N in the lettuce samples was measured by the Kjeldahl method (Li et al. ). For elemental analysis (P, K, Ca, Mn, and Zn), each treated dry tissue sample was digested with HNO3. After digestion, the element contents were determined by inductively coupled plasma emission spectrometry and inductively coupled plasma mass spectrometry. Soil sample collection The lettuce plants were removed from the pots after the bacterial and nutrient stress treatments. Large pieces of soil around the lettuce roots were removed, and the rhizosphere soil was carefully collected with a sterile brush. Rhizosphere soil samples were taken from the approximately 1-mm-thick soil layer attached to the roots. Three pots were randomly selected from the ten replicates, and the rhizosphere soil was passed through an 80-mesh sieve and mixed into a bulk sample. Finally, the collected soil samples were stored in 2-mL tubes and preserved at −80 °C for metagenomic and metabolomic analyses (three replicates) at Majorbio Bio-Pharm Technology Co., Ltd. (Shanghai, China). DNA extraction, Illumina NovaSeq sequencing, and data analysis Genomic DNA was extracted from the soil samples, and its concentration was checked using agarose gel electrophoresis. The V3–V4 region of the bacterial 16S rRNA gene was amplified using the primers 341F and 806R (Imparato et al. ). The ITS1-5F region of the fungal ITS gene was amplified using the primer pair ITS5-1737F and ITS1-2043R (Zhang et al. ). Sequencing libraries were constructed using the NEXTFLEX™ Rapid DNA-Seq Kit (Waltham, MA, USA). Sequencing was performed on the NovaSeq platform at Majorbio Bio-Pharm Technology Co., Ltd. (Shanghai, China). The soil microbial community dataset has been deposited in the NCBI Sequence Read Archive under accession number PRJNA1039622. CD-HIT software ( http://www.bioinformatics.org/cd-hit/ ) was used to cluster the genes and construct a non-redundant gene set in order to explore the commonalities and differences between samples (Fu et al. ). SOAPaligner ( http://soap.genomics.org.cn/ ) was used to align the high-quality reads of each sample against the non-redundant gene set and to calculate gene abundance in the corresponding sample (Li et al. ). The alpha diversity index (Simpson) was calculated to assess microbial diversity. Principal component analysis (PCA) was used for the rhizosphere microbial composition analysis and the differential metabolite analysis. PCA of the metagenomic data was performed using the ropls package in R ( https://rdrr.io/bioc/ropls/man/ropls-package.html ).
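As a minimal sketch of the alpha-diversity calculation mentioned above, the snippet below computes the Gini-Simpson index from a genus-level count table in base R. The counts are invented placeholders, and this is not the sequencing provider's exact pipeline; the PCA performed with the ropls package is not reproduced here.

```r
# Minimal sketch: Gini-Simpson diversity (1 - sum(p_i^2)) from genus-level counts.
# Rows are samples, columns are genera; all counts below are invented placeholders.
counts <- rbind(
  CKN     = c(Streptomyces = 120, Actinoallomurus = 15, Mesorhizobium = 400, Other = 900),
  SAAS63N = c(Streptomyces = 310, Actinoallomurus = 80, Mesorhizobium = 350, Other = 700)
)

simpson_index <- function(x) {
  p <- x / sum(x)   # relative abundances
  1 - sum(p^2)      # higher values indicate higher diversity
}

apply(counts, 1, simpson_index)
```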
Rhizosphere soil metabolite analysis The rhizosphere soil samples of no inoculation under the nutrient-deficient condition (CKN), inoculation under the nutrient-deficient condition (SAAS-63N), no inoculation under the nutrient-sufficient condition (CKS), and inoculation under the nutrient-sufficient condition (SAAS-63S) were frozen and stored in 1-mL Eppendorf tubes. Sample preparation for the metabonomic analysis was performed at Majorbio Bio-Pharm Technology Co., Ltd. (Shanghai, China) using standard procedures. The UHPLC-MS/MS analytical system consisted of an ultra-high-performance liquid chromatography (UHPLC) system (Thermo Fisher, Waltham, MA, USA) coupled to a Q Exactive HF-X Fourier transform mass spectrometer. The raw data files generated by UHPLC-MS/MS were processed using Progenesis QI (Waters Corporation, Milford, MA, USA) for baseline filtering, peak identification, integration, peak alignment, and metabolite quantitation. Metabolites were annotated using the Kyoto Encyclopedia of Genes and Genomes (KEGG) database (Ogata et al. ) and the Human Metabolome Database (HMDB, https://hmdb.ca/ ). PCA of the metabonomic data was performed using the stats package in R ( https://rdrr.io/r/stats/stats-package.html ). Differentially accumulated metabolites (DAMs) were distinguished using PCA and an orthogonal projections to latent structures-discriminant analysis (OPLS-DA) model. The DAMs were screened according to a fold change ≥ 2. Finally, the KEGG database was used for the pathway enrichment analysis of the DAMs. The metabolite volcano plot was generated using the ggplot2 package in R ( https://rdrr.io/cran/ggplot2/ ). Statistical analysis The significance of the physiological and biochemical indices of the different lettuce treatments was calculated using SPSS 26.0 (IBM Corp., Armonk, NY, USA). Differences among all treatments were assessed for significance using one-way analysis of variance (ANOVA). Figures were generated using Origin 2021 software (OriginLab, Northampton, MA, USA). Statistically significant differences were defined at p < 0.05.
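The DAM screening rule described above (fold change ≥ 2 between groups) can be illustrated with the following base-R sketch on an invented intensity table. The real analysis used Progenesis QI for quantitation and an OPLS-DA model, neither of which is reproduced here; the metabolite names are taken from the text, but all numbers are placeholders.

```r
# Minimal sketch of DAM screening by fold change (>= 2 in either direction,
# i.e. |log2FC| >= 1). Rows are metabolites, columns are replicates; the
# intensities are invented placeholders, not study data.
set.seed(1)
intensity <- matrix(abs(rnorm(5 * 6, mean = 1e5, sd = 3e4)), nrow = 5,
                    dimnames = list(
                      c("phenylalanine", "sinapic_acid", "fraxetin", "flavone", "trehalose"),
                      c(paste0("CKN_", 1:3), paste0("SAAS63N_", 1:3))
                    ))

ckn_mean <- rowMeans(intensity[, 1:3])
trt_mean <- rowMeans(intensity[, 4:6])
log2fc   <- log2(trt_mean / ckn_mean)   # SAAS-63N vs. CKN

dams <- data.frame(
  metabolite = rownames(intensity),
  log2FC     = round(log2fc, 2),
  regulated  = ifelse(abs(log2fc) >= 1, ifelse(log2fc > 0, "up", "down"), "ns")
)
dams[order(-abs(dams$log2FC)), ]
```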
Growth-promoting effect of B. velezensis SAAS-63 under nutrient deficiency Under nutrient-deficient conditions, the lettuce plants were small and their leaves grew slowly, while under nutrient-sufficient conditions, the plants grew better. After inoculation with strain SAAS-63, lettuce growth was significantly promoted under both nutritional conditions (Fig. a). Under nutrient-deficient conditions, the fresh weight, leaf circumference, and shoot length of the lettuce inoculated with strain SAAS-63 increased by 43.69%, 10.45%, and 60.97%, respectively.
Under nutrient-sufficient conditions, the fresh weight, leaf circumference, and shoot length of the lettuce inoculated with strain SAAS-63 increased by 18.32%, 12.55%, and 25.85%, respectively (Fig. b–d). These results indicate that inoculation with B. velezensis SAAS-63 can promote lettuce growth under either condition, but especially under nutrient-deficient conditions. Effect of B. velezensis SAAS-63 on phytochemicals in lettuce Nutrient deficiency leads to plant dysplasia and inhibits the normal functioning of plants. We conducted a series of experiments to determine the effects of nutrient stress and PGPR inoculation on lettuce. The plants produced a large amount of MDA under nutrient deficiency but only a small amount of MDA when nutrients were sufficient (Fig. a). The MDA content of the plants inoculated with B. velezensis SAAS-63 under the two nutrient conditions decreased by 68.35% and 75.30%, respectively, compared with CK. Under the two nutrient conditions, the Pro content of the plants after inoculation with B. velezensis SAAS-63 increased by 281.99% and 291.43%, respectively, compared with CK (Fig. b). The SS content in the plants under nutrient-sufficient conditions was significantly lower than that under nutrient-deficient conditions, and the SS content under the two conditions was reduced by 43.43% and 13.77%, respectively, following inoculation with B. velezensis SAAS-63 (Fig. c). The activity of antioxidant enzymes in the plants differed between the nutrient conditions. Compared with nutrient-sufficient conditions, the activity of antioxidant enzymes in the plants increased under nutrient deficiency, and inoculation with B. velezensis SAAS-63 greatly improved the activity of antioxidant enzymes under nutrient-deficient conditions. Under nutrient-deficient conditions, the activities of SOD, POD, and CAT increased by 56.26%, 97.06%, and 160.21%, respectively, after inoculation with B. velezensis SAAS-63 (Fig. d–f). Effect of B. velezensis SAAS-63 on nutrient accumulation in plant roots and leaves To explore the effect of B. velezensis SAAS-63 inoculation on the accumulation of nutrients in the plants under different nutrient conditions, the contents of macronutrients and micronutrients were measured. Under nutrient-deficient conditions, the macronutrients showed a similar accumulation pattern: they accumulated in large quantities in the roots but at low levels in the leaves (Fig. a–c). By contrast, under nutrient-sufficient conditions, inoculation with B. velezensis SAAS-63 significantly reduced the accumulation of macronutrients in the roots compared with nutrient-deficient conditions, whereas macronutrient accumulation significantly increased in the leaves. Under nutrient-deficient conditions, after inoculation with B. velezensis SAAS-63, the contents of N, P, and K in the roots decreased by 89.11%, 34.58%, and 49.56%, respectively, while the contents of N, P, and K in the leaves increased by 120.31%, 19.75%, and 10.47%, respectively. The accumulation of micronutrients also changed after inoculation with B. velezensis SAAS-63 (Fig. d–f). Under nutrient-deficient conditions, the contents of Ca, Mn, and Zn in the roots of plants inoculated with B. velezensis SAAS-63 decreased by 58.35%, 89.09%, and 19.91%, respectively, and the contents of Ca, Mn, and Zn in the leaves increased by 27.50%, 120.12%, and 13.79%, respectively.
In contrast to the macronutrients, the accumulation of micronutrients in the roots increased following inoculation with B. velezensis SAAS-63 under nutrient-sufficient conditions, which did not affect the accumulation of micronutrients in the leaves, and these values were still significantly higher than in the non-inoculation treatment. Changes in microbial community composition and diversity following B. velezensis SAAS-63 inoculation In this study, the diversity of soil microorganisms was analyzed. The Simpson index of CKN was the lowest, and inoculation with strain SAAS-63 significantly increased the Simpson index of the rhizosphere soil microorganisms under nutrient deficiency (Fig. a). These results showed that inoculation with B. velezensis SAAS-63 increased soil microbial diversity. The PCA based on microbial taxonomy (genus level) and functional classification showed that the treatment groups under different nutrient conditions formed distinct clusters (Fig. b), indicating that both the nutrient conditions and inoculation with B. velezensis SAAS-63 influenced the composition and function of the microbial community in the plant rhizosphere. According to the analysis of the relative abundance of species at the genus level, the community structure of the rhizosphere microorganisms also changed (Fig. c). To determine the effect of B. velezensis SAAS-63 inoculation under nutrient-deficient conditions, we screened for microorganisms whose relative abundance increased significantly following inoculation; the relative abundance of four beneficial microbial taxa increased significantly, suggesting that these microorganisms help plants cope with nutrient deficiency (Fig. d). Under nutrient-deficient conditions, B. velezensis SAAS-63 inoculation significantly increased the abundance of the beneficial taxa Streptomyces , Actinoallomurus , Verrucomicrobia , and Chloroflexi . Metabolic differences in rhizosphere soil across B. velezensis SAAS-63 treatments The changes in metabolites in the rhizosphere soil were studied using nontargeted metabonomics. Inoculation with B. velezensis SAAS-63 caused changes in the metabolites of the rhizosphere soil. The PCA of the metabolite profiles showed that the first two principal components clearly separated the metabolic profiles of the two treatment groups under nutrient-deficient conditions; PC1 and PC2 explained more than 57% of the variability, with the separation driven mainly by PC1 (Fig. a). An OPLS-DA was also performed, and the score plot showed that the different treatments displayed significant segregation (Fig. b).
As the results of the KEGG enrichment analysis indicated significantly enriched phenylpropanoid metabolism, we analyzed the metabolites of this pathway (Fig. ). The abundance of metabolites from different branches of this pathway changed after inoculation with B. velezensis SAAS-63 under nutrient deficiency. Phenylpropanoid metabolism starts with phenylalanine. At the beginning of phenylpropanoid metabolism, the abundance of phenylalanine increased significantly after B. velezensis SAAS-63 inoculation. As a substrate, the increase in the abundance of phenylalanine increased the abundance of metabolites in different metabolic branches. The abundance of sinapic acid and fraxetin, two metabolites from the phenylpropanoid metabolic branch, also increased after inoculation with B. velezensis SAAS-63. On the contrary, metabolites (including flavone and isoflavone) of another branch of phenylpropanoid metabolism were significantly decreased, suggesting the redirection of metabolic flux between the branches of the phenylpropanoid pathway. The KEGG pathways associated with representative metabolites were primarily involved in lipid metabolism, carbohydrate metabolism, and energy metabolism, among other areas (Supplemental Fig. ). Combined “omics” methods revealed that differentially enriched KEGG pathways in the SAAS-63N metagenomes, compared to the CKN metagenomes, included starch and sucrose metabolism (ko00500), which is a component of carbohydrate metabolism (Fig. ). After inoculation with B. velezensis SAAS-63, the abundance of many genes encoding starch and sucrose metabolic pathway enzymes changed (Supplemental Fig. ). The abundance of the gene TREH encoding trehalose hydrolase (EC3.2.1.28) in the SAAS-63N metagenome was significantly higher than that in the CKN metagenome (Fig. ). In addition, we searched for the associated metabolites of the enzymes whose gene abundances were altered and found that the trehalose content decreased significantly after inoculation with strain SAAS-63. The results showed that strain SAAS-63 inoculation could recruit specific rhizosphere microorganisms to use trehalose to re-establish the pathway of carbon (C) metabolism in the rhizosphere. Relationship between metabolites and rhizosphere soil microorganisms Explaining the relationship between microbes and metabolites is a necessary step in exploring how PGPRs improve plant resistance to nutrient deficiency. Therefore, we searched for correlations between the top 10 genera and obtained DAMs. Significant ( p < 0.05) and highly significant ( p < 0.01) correlations were observed between certain rhizosphere microbes and differential metabolites (Fig. ). Phenylalanine correlated significantly with Actinoallomurus ( r = 0.8287, p < 0.001) and Rugosimonospora ( r = 0.9041, p < 0.001). Flavone was highly positively correlated with Mesorhizobium ( r = 0.8233, p < 0.001) and Gemmatimonadetes ( r = 0.9571, p < 0.001). Sinapic acid was positively correlated with Verrucomicrobia ( r = 0.7282, p < 0.01) and Actinoallomurus ( r = 0.8229, p < 0.01). Fraxetin was positively correlated with Verrucomicrobia ( r = 0.7632, p < 0.01) and Actinoallomurus ( r = 0.6477, p < 0.05). It is worth noting that trehalose had a significant positive correlation with most rhizosphere microorganisms. Meanwhile, trehalose showed a highly positive correlation with Mesorhizobium ( r = 0.8326, p < 0.001), Proteobacteria ( r = 0.9106, p < 0.001), Hyphomicrobiales ( r = 0.8335, p < 0.001), and Deltaproteobacteria ( r = 0.8388, p < 0.001). 
This may explain the significant decrease in trehalose in the rhizosphere following B. velezensis SAAS-63 inoculation. The results showed that the relative abundance of six microorganisms had the strongest correlation with certain metabolites in the rhizosphere, indicating that these microorganisms may be involved in the formation of most of the metabolites in the soil.
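A generic sketch of the genus-metabolite correlation analysis reported above is shown below. The text does not state the correlation method, so Spearman correlation is assumed here purely for illustration, and all abundances and intensities are invented placeholders rather than study data.

```r
# Generic sketch: correlating genus-level relative abundances with metabolite
# intensities across samples. All values are invented placeholders.
set.seed(2)
n <- 12   # hypothetical number of rhizosphere soil samples
genus_abund <- data.frame(
  Actinoallomurus = runif(n),
  Mesorhizobium   = runif(n)
)
metabolites <- data.frame(
  phenylalanine = genus_abund$Actinoallomurus + rnorm(n, sd = 0.2),
  trehalose     = genus_abund$Mesorhizobium   + rnorm(n, sd = 0.2)
)

# Pairwise correlation tests (Spearman assumed), with Benjamini-Hochberg
# adjustment of the p-values for multiple testing.
pairs <- expand.grid(genus = names(genus_abund), metabolite = names(metabolites),
                     stringsAsFactors = FALSE)
pairs$r <- NA_real_
pairs$p <- NA_real_
for (i in seq_len(nrow(pairs))) {
  ct <- cor.test(genus_abund[[pairs$genus[i]]], metabolites[[pairs$metabolite[i]]],
                 method = "spearman", exact = FALSE)
  pairs$r[i] <- unname(ct$estimate)
  pairs$p[i] <- ct$p.value
}
pairs$p_adj <- p.adjust(pairs$p, method = "BH")
pairs
```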
By contrast, under nutrient-sufficient conditions, inoculation with B. velezensis SAAS-63 significantly reduced the accumulation of macronutrients in the roots compared with nutrient-deficient conditions, whereas macronutrient accumulation significantly increased in the leaves. After inoculation with B. velezensis SAAS-63 under nutrient-deficient conditions, the contents of N, P, and K in the roots decreased by 89.11%, 34.58%, and 49.56%, respectively, while the contents of N, P, and K in the leaves increased by 120.31%, 19.75%, and 10.47%, respectively. The accumulation of micronutrients also changed after inoculation with B. velezensis SAAS-63 (Fig. d–f). Under nutrient-deficient conditions, the contents of Ca, Mn, and Zn in the roots of plants inoculated with B. velezensis SAAS-63 decreased by 58.35%, 89.09%, and 19.91%, respectively, and the contents of Ca, Mn, Zn, and Cu in the leaves increased (by 27.50%, 120.12%, and 13.79% for Ca, Mn, and Zn, respectively). In contrast to the macronutrients, micronutrient accumulation in the roots increased following inoculation with B. velezensis SAAS-63 under nutrient-sufficient conditions; this did not reduce micronutrient accumulation in the leaves, which remained significantly higher than in the non-inoculated treatment.
Changes in the rhizosphere microbial community after B. velezensis SAAS-63 inoculation
In this study, the diversity of the soil microorganisms was analyzed. The Simpson index of CKN was the lowest, and inoculation with strain SAAS-63 significantly increased the Simpson index of the rhizosphere soil microorganisms under nutrient deficiency (Fig. a). These results show that inoculation with B. velezensis SAAS-63 increased soil microbial diversity. PCA based on microbial taxonomic (genus-level) and functional classifications showed that the treatment groups under different nutrient conditions formed distinct clusters (Fig. b), indicating that both the nutrient conditions and inoculation with B. velezensis SAAS-63 influenced the composition and function of the microbial community in the plant rhizosphere. According to the analysis of relative abundance at the genus level, the community structure of the rhizosphere microorganisms also changed (Fig. c). To determine the effect of B. velezensis SAAS-63 inoculation under nutrient-deficient conditions, we searched for microorganisms whose relative abundance increased significantly following inoculation; the relative abundance of four beneficial microbial taxa increased significantly, suggesting that these microorganisms help plants cope with nutrient deficiency (Fig. d). Under nutrient-deficient conditions, B. velezensis SAAS-63 inoculation significantly increased the abundance of the beneficial taxa Streptomyces , Actinoallomurus , Verrucomicrobia , and Chloroflexi .
Changes in rhizosphere metabolites under the CK and B. velezensis SAAS-63 treatments
The changes in metabolites in the rhizosphere soil were studied using nontargeted metabolomics. Inoculation with B. velezensis SAAS-63 altered the metabolites in the rhizosphere soil. PCA of the metabolite profiles showed that the first and second principal components clearly separated the metabolic profiles of the two treatment groups under nutrient-deficient conditions; PC1 and PC2 together explained more than 57% of the variability, with the separation driven mainly by PC1 (Fig. a). An OPLS-DA was also performed, and the score plot showed that the different treatments displayed significant segregation (Fig. b).
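For readers who want to see how such an ordination is computed, the following Python sketch runs a PCA on a samples-by-metabolites matrix and reports how much variability the first two components explain, mirroring the PC1/PC2 figures quoted above. Everything in it (the synthetic abundance matrix, the group labels, and the log-transform and centering choices) is an illustrative assumption rather than the authors' actual metabolomics pipeline, which also included an OPLS-DA.

```python
import numpy as np

# Hypothetical input: rows = rhizosphere samples (CKN and SAAS-63N replicates),
# columns = metabolite abundances from a nontargeted metabolomics run.
rng = np.random.default_rng(0)
abundances = rng.lognormal(mean=2.0, sigma=0.5, size=(12, 200))
labels = ["CKN"] * 6 + ["SAAS-63N"] * 6

# Log-transform and mean-center so the PCA reflects relative changes
# rather than the absolute abundance scale (an assumed preprocessing choice).
X = np.log10(abundances)
X -= X.mean(axis=0)

# PCA via singular value decomposition.
U, s, Vt = np.linalg.svd(X, full_matrices=False)
scores = U * s                      # sample coordinates on the principal components
explained = s**2 / np.sum(s**2)     # proportion of variance captured by each PC

print(f"PC1 {explained[0]:.1%}, PC2 {explained[1]:.1%}, "
      f"PC1+PC2 {explained[:2].sum():.1%} of the variance")
for lab, (pc1, pc2) in zip(labels, scores[:, :2]):
    print(f"{lab:10s} PC1={pc1:7.2f} PC2={pc2:7.2f}")
```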
Comparing the changes in the metabolites in the two groups under nutrient deficiency, 376 metabolites were significantly upregulated and 337 metabolites were significantly downregulated under inoculation with B. velezensis SAAS-63 compared with the CK (Fig. c). A KEGG enrichment analysis was performed in the two treatment groups under nutrient-deficient conditions. The pathways of significant enrichment of the top four metabolites in the two treatment groups were compared, and three pathways related to phenylpropane metabolism were found (Fig. d). Therefore, we paid greater attention to the effect of B. velezensis SAAS-63 inoculation on the phenylpropanoid biosynthesis pathway in subsequent analyses. Phenylpropanoid metabolism is one of the most important secondary metabolic pathways in plants. As the results of the KEGG enrichment analysis indicated significantly enriched phenylpropanoid metabolism, we analyzed the metabolites of this pathway (Fig. ). The abundance of metabolites from different branches of this pathway changed after inoculation with B. velezensis SAAS-63 under nutrient deficiency. Phenylpropanoid metabolism starts with phenylalanine. At the beginning of phenylpropanoid metabolism, the abundance of phenylalanine increased significantly after B. velezensis SAAS-63 inoculation. As a substrate, the increase in the abundance of phenylalanine increased the abundance of metabolites in different metabolic branches. The abundance of sinapic acid and fraxetin, two metabolites from the phenylpropanoid metabolic branch, also increased after inoculation with B. velezensis SAAS-63. On the contrary, metabolites (including flavone and isoflavone) of another branch of phenylpropanoid metabolism were significantly decreased, suggesting the redirection of metabolic flux between the branches of the phenylpropanoid pathway. The KEGG pathways associated with representative metabolites were primarily involved in lipid metabolism, carbohydrate metabolism, and energy metabolism, among other areas (Supplemental Fig. ). Combined “omics” methods revealed that differentially enriched KEGG pathways in the SAAS-63N metagenomes, compared to the CKN metagenomes, included starch and sucrose metabolism (ko00500), which is a component of carbohydrate metabolism (Fig. ). After inoculation with B. velezensis SAAS-63, the abundance of many genes encoding starch and sucrose metabolic pathway enzymes changed (Supplemental Fig. ). The abundance of the gene TREH encoding trehalose hydrolase (EC3.2.1.28) in the SAAS-63N metagenome was significantly higher than that in the CKN metagenome (Fig. ). In addition, we searched for the associated metabolites of the enzymes whose gene abundances were altered and found that the trehalose content decreased significantly after inoculation with strain SAAS-63. The results showed that strain SAAS-63 inoculation could recruit specific rhizosphere microorganisms to use trehalose to re-establish the pathway of carbon (C) metabolism in the rhizosphere. Explaining the relationship between microbes and metabolites is a necessary step in exploring how PGPRs improve plant resistance to nutrient deficiency. Therefore, we searched for correlations between the top 10 genera and obtained DAMs. Significant ( p < 0.05) and highly significant ( p < 0.01) correlations were observed between certain rhizosphere microbes and differential metabolites (Fig. ). Phenylalanine correlated significantly with Actinoallomurus ( r = 0.8287, p < 0.001) and Rugosimonospora ( r = 0.9041, p < 0.001). 
Flavone was highly positively correlated with Mesorhizobium ( r = 0.8233, p < 0.001) and Gemmatimonadetes ( r = 0.9571, p < 0.001). Sinapic acid was positively correlated with Verrucomicrobia ( r = 0.7282, p < 0.01) and Actinoallomurus ( r = 0.8229, p < 0.01). Fraxetin was positively correlated with Verrucomicrobia ( r = 0.7632, p < 0.01) and Actinoallomurus ( r = 0.6477, p < 0.05). It is worth noting that trehalose had a significant positive correlation with most rhizosphere microorganisms; in particular, trehalose was highly positively correlated with Mesorhizobium ( r = 0.8326, p < 0.001), Proteobacteria ( r = 0.9106, p < 0.001), Hyphomicrobiales ( r = 0.8335, p < 0.001), and Deltaproteobacteria ( r = 0.8388, p < 0.001). This may explain the significant decrease in trehalose in the rhizosphere following B. velezensis SAAS-63 inoculation. Overall, the relative abundances of six microorganisms showed the strongest correlations with certain rhizosphere metabolites, indicating that these microorganisms may be involved in the formation of most of the metabolites in the soil.
As an abiotic stress, nutrient stress seriously inhibits plant growth. As beneficial microorganisms, PGPRs have great potential not only to promote plant growth but also to help plants resist abiotic stress (Karnwal et al. ). Many studies have reported the ability of PGPRs to promote growth and confer resistance to abiotic stress; however, most have focused on antioxidant capacity and osmotic regulation under drought, salinity, and high-temperature stress, and understanding of how rhizosphere microorganisms and metabolites change during plant–PGPR interactions under nutrient stress remains limited. In the post-genome era, metagenomics and metabolomics are regarded as important tools for modern agriculture because they can reveal complex mechanisms through microorganisms and metabolites. Therefore, this study not only systematically examined the effects of PGPR inoculation on the physiology and biochemistry of lettuce but also explored the response to nutrient stress at the gene and molecular levels. In this study, inoculation with strain SAAS-63 promoted the growth of lettuce under both nutrient-sufficient and nutrient-deficient conditions and improved agronomic parameters; notably, the growth of seedlings inoculated with strain SAAS-63 under nutrient-deficient conditions improved the most. Other studies have also reported increased growth under abiotic stress following the application of PGPRs (Nigam et al. ). In this study, root length under nutritional stress was shorter than that under adequate nutrition, and it increased significantly after inoculation with B. velezensis SAAS-63, indicating that inoculation with PGPRs promoted root development and helped the plants absorb more of the surrounding nutrients. Abiotic stress disrupts the osmotic balance and induces excessive production of ROS in plant cells, and enzymatic antioxidant systems together with nonenzymatic components play key roles in eliminating toxic levels of ROS (Liebthal et al. ). In this experiment, the activities of SOD, POD, and CAT in the leaves of inoculated plants were significantly higher than those of non-inoculated plants, indicating that inoculated plants have a greater capacity to scavenge reactive oxygen species than uninoculated plants under nutrient-deficient conditions. In this study, the content of Pro increased significantly after inoculation with B. velezensis SAAS-63.
Pro plays a variety of key roles, such as stabilizing cell membranes and proteins, transmitting cellular signals, and regulating gene expression (Muhammad et al. ). As an osmotic regulator, an increased Pro content helps alleviate stress in plants (Guan et al. ). However, the improvement in nutrient stress resistance conferred by inoculation with B. velezensis SAAS-63 was not accompanied by an increase in all osmotic regulating substances; for example, the content of SS decreased significantly after B. velezensis SAAS-63 inoculation. We speculate that inoculation with PGPRs promoted the decomposition and utilization of SS. In addition, the content of MDA in the plants increased significantly under nutrient deficiency and decreased significantly after inoculation with B. velezensis SAAS-63. MDA is produced by plasma membrane lipid peroxidation (LPO) (Rashid et al. ). As an index of LPO, MDA directly reflects the degree of stress-induced damage to plants (Ahmad et al. ; Hasanuzzaman et al. ); a lower MDA content therefore indicates less damage. These results indicate that inoculation with B. velezensis SAAS-63 helped reduce the harm caused by nutrient stress to the plants. Thus, PGPRs can regulate the antioxidant capacity of plants through a series of biochemical reactions (enzymatic and non-enzymatic systems) to prevent oxidative damage caused by ROS. Plant growth was limited under nutrient deficiency, which was consistent with the significant decrease in the accumulation of nutrients in the plant leaves. The higher concentrations of macronutrients and micronutrients in the leaves of seedlings inoculated with B. velezensis SAAS-63 suggest the involvement of PGPRs in increasing N and mineral uptake (He et al. ; Nguyen et al. ). In addition, the accumulation of N and mineral elements in the roots of seedlings without B. velezensis SAAS-63 inoculation increased significantly, whereas that of the inoculated seedlings decreased significantly, showing that PGPR inoculation contributed to the transport of nutrients from the roots to the leaves; however, the role of PGPRs was not obvious under nutrient-sufficient conditions. Consistent with the plant morphological indices, root length was significantly promoted by B. velezensis SAAS-63 inoculation, and an increase in root length helps plants absorb more nutrients. In addition, the positive role of PGPRs may also be related to substances that promote plant nutrient absorption, such as melatonin and dopamine, which can promote mineral absorption under abiotic stress (Li et al. ). The effects of some mineral elements (Zn, Fe, and P) on transcriptional responses, nutrient sensing, signal transduction, and transport in the regulation of plant nutrition have also been noted (Fan et al. ). Therefore, this study confirmed that PGPRs can change the sensing and signaling involved in plant nutrition to promote plant growth under nutrient-deficient conditions, and that this change is achieved by regulating the content of nutrient elements. With changes in climate, crops are increasingly subjected to abiotic stresses during cultivation. By settling in the plant rhizosphere, PGPRs protect plants from many abiotic stresses. In this experiment, inoculation with B. velezensis SAAS-63 under nutrient deficiency significantly changed the diversity of the rhizosphere microorganisms, which may be due to the interaction between B.
velezensis SAAS-63 and native microorganisms to form a unique microbial network. The formation of soil microbial complex community can regulate the cycle of soil nutrients and affect soil properties, promote plant growth, and promote the sustainability of the ecosystem (Zhang et al. ). There were significant differences in multiple analyses between groups inoculated with B. velezensis SAAS-63 and those without inoculation, indicating that inoculation with PGPRs influenced the rhizosphere microbial community structure. Under nutrient deficiency, B. velezensis SAAS-63 inoculation significantly increased the abundance of Streptomyces , Actinoallomurus , Verrucomicrobia , and Chloroflexi . Streptomyces can relieve the pressure caused by the environment and increase the yield of plants. A previous study demonstrated that Actinoallomurus is related to the synthesis of flavonoids and coumarins, which is beneficial to the synthesis of substances resistant to environmental stress (Pozzi et al. ). In the correlation analysis between the rhizosphere microorganisms and metabolites, we also found a significant correlation between C and the coumarin precursor sinapic acid, which shows that Actinoallomurus plays an important role in the production of stress-resistant substances in plants. Verrucomicrobia plays a vital role in agricultural sustainability and the promotion of plant and crop yield by availing nutrients and preparing a conducive environment (Baliyarsingh et al. ). Research shows that Chloroflexi are key bacteria that help improve resistance (Wang et al. ). Therefore, this study showed that the structure of rhizosphere microorganisms was altered by inoculating plants with B. velezensis SAAS-63, with more beneficial bacteria recruited to improve their ability to resist nutritional deficiency. To explore the effect of strain SAAS-63 inoculation on improving plant response to nutritional stress, the metabolites in the rhizosphere were also analyzed. There are two main sources of soil metabolites, one from plant roots and the other from microorganisms. The composition of root exudates varies with plant species and environmental stress (Cheng et al. ). A former study have shown that exogenous inoculants can alter plant metabolism (Su et al. ). Our study found that inoculation with B. velezensis SAAS-63 under nutrient deficiency significantly affected the soil metabolite spectrum, including organic acids and derivatives and lipids and lipid-like molecules. It also interfered with some metabolic pathways including those involved in basic C metabolism and some secondary metabolic pathways. Firstly, the metabolic pathway of phenylpropanoid changed significantly after B. velezensis SAAS-63 inoculation in this study. When the plants were subjected to nutritional stress, the contents of flavonoids and isoflavones in the non-inoculated plants were significantly higher than those in the inoculated plants. Flavonoids and isoflavones are considered important substances for plants to resist abiotic stress (Pozzi et al. ; Trush and Pal'ove-Balang ). Therefore, the decrease of these substances after inoculation with B. velezensis SAAS-63 indicated that B. velezensis SAAS-63 can help plants to resist stress. Secondly, to further explore the mechanism of resistance of B. velezensis SAAS-63 to stress, we analyzed other substances in the phenylpropanoid metabolic pathway and found that the contents of sinapic acid and fraxetin increased significantly after B. velezensis SAAS-63 inoculation. 
Sinapic acid is the precursor of lignin, and an increase in sinapic acid content helps in plant lignin synthesis. Lignin is a major player for plants to perceive and respond to environmental stress. It can not only provide mechanical strength for plant secondary cell wall but also protect cells from abiotic stress (Jia et al. ). Fraxetin, which is a coumarin, also plays an important role in reducing abiotic stress in plants (Singh et al. ). A former study showed that plants can release coumarins to the rhizosphere under abiotic stress, which is a key means by which plants can obtain iron (Chutia et al. ). As indicated in Fig. , the synthesis of flavonoids and isoflavone decreased, and more phenylpropane flowed to the lignin synthesis pathway and coumarin synthesis pathway. Therefore, the redirected metabolic flux after B. velezensis SAAS-63 inoculation explained the mechanism by which B. velezensis SAAS-63 helps plants to resist nutrient stress. Plants need to constantly adjust the distribution of energy and metabolites to maintain growth and survive under stress (Zhang et al. ). In addition, the primary metabolism of the plants also changed after inoculation with B. velezensis SAAS-63. Carbon metabolism includes starch and sucrose metabolism, glycolysis, and the tricarboxylic acid cycle (Hartman et al. ). Trehalose metabolism is a branch of starch and sucrose metabolism. A former study showed that trehalose plays an important role in plant growth and the stress response and can promote root growth and stomatal closure (Kosar et al. ). When the trehalose content decreased after inoculation with B. velezensis SAAS-63, the abundance of genes encoding trehalose hydrolases in the plant rhizosphere microbial macrogenomes increased, which indicated that B. velezensis SAAS63 inoculation could help plants recruit microorganisms to decompose and utilize trehalose. Research has indicated that starch and sucrose metabolism function are significantly correlated with different bacteria along different metabolite pathways, which may confirm that different kinds of bacteria lead to different metabolisms by using and decomposing starch and sucrose (Song et al. ). Meanwhile, the phenylpropane metabolic pathway is also induced by C metabolism (Zhang et al. ). To sum up, the addition of PGPRs can alter the metabolism of the plant rhizosphere. These results suggested that PGPRs can improve the stress tolerance of plants by affecting primary and secondary metabolism. Soil is a quasi-organism, in which microorganisms undertake most of the metabolic activities of soil. The changes of soil metabolites often depend on the species and abundance of soil microorganisms. Complex microbial communities play a decisive role in the cycle and metabolism of exogenous nutrients in soil. Therefore, it is a meaningful work to analyze and reveal the correlation between soil metabolism and bacterial community. The results of our analysis showed that the changes in several metabolites following B. velezensis SAAS-63 inoculation were closely related to microorganisms. A former study showed that flavonoid derivatives secreted by the roots drive oxalic acid bacteria to become enriched in the rhizosphere, which in turn promotes lateral root development and N uptake in maize in low N soil (Yu et al. ). In this study, the differential metabolites were significantly enriched in the phenylpropane metabolic pathway, which is an important branch of N metabolism. 
This shows that rhizosphere microorganisms play an essential role in N metabolism. Song et al. showed that starch and sucrose metabolism was most closely related to different bacterial members across different metabolite pathways, which may confirm that bacteria regulate starch and sucrose metabolism and thereby drive different types of metabolism in the rhizosphere. This further supports the interpretation that the decrease in trehalose abundance within the starch and sucrose metabolic pathway in this study was due to decomposition by specific rhizosphere microorganisms, which used trehalose to rebuild rhizosphere C metabolism. Our results show that the changes in several metabolites induced by B. velezensis SAAS-63 inoculation are related to changes in the bacterial and fungal communities, further supporting the view that soil microorganisms can promote or inhibit the accumulation of soil metabolites. Based on metagenomics and metabolomics, this study comprehensively analyzed the changes in rhizosphere microorganisms and metabolites after inoculation with B. velezensis SAAS-63 and identified key metabolic pathways. However, these results do not fully explain the exact mechanism by which PGPRs help plants resist nutritional stress, and further work is needed to analyze the mechanisms by which PGPRs confer stress resistance. Transcriptomic analysis and dynamic monitoring of the changes in metabolites and related microorganisms are necessary to further reveal the mechanism by which inoculation with PGPRs improves resistance to nutritional stress. Our results confirm the close relationship between soil microorganisms and soil metabolism, which helps us understand the mechanism of PGPR-induced tolerance of lettuce to nutrient deficiency. This can guide measures for improving soil quality and crop yield through the addition of PGPRs, thereby providing an important reference for further study of plant–microorganism–environment interactions. Below is the link to the electronic supplementary material. Supplementary file1 (PDF 638 KB)
Bacterial and fungal community composition and community-level physiological profiles in forest soils
2697347e-8513-4fab-91fb-a26e9db5c59d
10118104
Microbiology[mh]
80%–90% of the processes in soils may be mediated by microorganisms . Therefore, the composition and functioning of the soil microbial communities have significant implications for carbon (C) and nutrient cycling, especially in forest soils rich in organic matter. The composition of plant litter (e.g., the relative proportions of cellulose and lignin) may affect the structure and functioning of microbial communities . In addition, soil properties, such as nutrient concentrations and pH, influence these aspects of the microbial communities [ – ]. However, it remains important to develop an improved understanding of (i) the factors that strongly influence the composition and functioning of microbial communities, (ii) the relationships in potential functioning and community composition between the soil O and A horizons, and (iii) the degree to which microbial community diversity is translated into functional diversity in forest soils. BIOLOG microplate technique and polymerase chain reaction–denaturing gradient gel electrophoresis (PCR-DGGE) analysis of rDNA fragments have often been used to characterize potential functioning and community composition of microbial community, respectively . In response to criticism that community-level physiological profile (CLPP) based on the BIOLOG microplate provide a biased representation of the functional ability of culturable bacteria capable of rapidly growing on substrates in the BIOLOG plate [ – ], Lladó and Baldrian suggested that CLPP was able to evaluate the functional potential of the fast-growing copiotrophic bacteria , which are active or potentially active bacteria that largely contribute to nutrient cycling in soils [ – ]. For fungal CLPPs, the limitation that the BIOLOG plate selects only culturable microorganisms is minor because the majority of fungi are culturable except for obligate symbionts . To date, many studies have supported the utility of CLPP for studying the potential functioning of the microbial community and comparing microbial communities in different samples [ – ]. Although DGGE analysis does not provide direct phylogenetic information and underestimates total microbial diversity, it is a rapid and inexpensive method that quantitatively detects differences in the diversity and composition of microbial communities, which is comparable to those of high-throughput sequencing . Using the BIOLOG microplate technique and PCR-DGGE analysis of 16S and 18S rDNA fragments, we characterized two attributes of the bacterial and fungal communities in the O and A horizons of forest soils, namely their potential functioning and composition. In this study, the following three hypotheses were tested. We hypothesized that microbial community composition and potential functioning will differ distinctly between the O and A horizons because of differences in organic matter composition and pH (Hypothesis 1), but that these microbial characteristics would covary between the O and A horizons owing to the possible linkage in organic matter characteristics and pH between the two horizons at each site. Thus, differences in the O horizon between sites are accompanied by corresponding differences in the A horizon between sites for both community composition and potential functioning (Hypothesis 2). In addition, we hypothesized that potential functioning of the microbial community is associated with the community composition (Hypothesis 3). 
Soils Litter and soil samples were collected from the O and A horizons, respectively, at each of 12 forest sites (35.81°N–36.34°N, 137.81°E–137.99°E) in Nagano Prefecture, Japan, in November, 2009 ( n = 6 for Andosols; n = 6 for Cambisols). The altitude of the sampling sites ranged from 700 to 2,045 m. Most of the sites were situated in conifer forests (vegetation at most sites was Larix kaempferi , with Pinus densiflora and Cryptomeria japonica at some sites). Because the vegetation was the same at most sites, we did not examine the effect of vegetation on microbial community composition and potential functioning. At each site, samples were collected from five plots, and then were pooled and mixed to form a composite sample and sieved through a 2 mm mesh. The litter and soil samples were stored at −20 °C for DNA extraction, and at 4 °C for BIOLOG and microbial biomass C measurements. A portion of the soil sample was air-dried for chemical analyses, whereas a portion of the litter sample was dried at 70 °C and then ground to powder using a blender (Osaka Chemical WB-1, Osaka, Japan). For pH measurement, the dried litter samples were not ground. The pH was measured from a soil–water suspension (1:2.5, w/v) or a litter–water suspension (1:50, w/v) with a glass electrode . Organic C and total nitrogen (N) contents were determined by dry combustion using an elemental analyzer (Thermo Finnigan Flash EA1112, Waltham, MA, USA) . The ground litter sample was fractionated into water-soluble polysaccharide, hemicellulose, cellulose, lignin, and lipids at Createrra, Inc. (Tokyo, Japan) using the proximate analytical method of Waksman and Stevens with some modifications . Organically bound (Al p and Fe p ) and non-crystalline and organically bound forms (Al o and Fe o ) of aluminum (Al) and iron (Fe) were extracted with 0.1 M sodium pyrophosphate (pH 10) and 0.2 M acid ammonium oxalate (pH 3), respectively . Aluminum and Fe were analyzed using flameless and flame atomic absorption spectrometry (Perkin Elmer 5100 PC, Tokyo, Japan), respectively. All data are expressed on a dry weight basis. All the analyses, including the microbial ones described below, were performed in 2009 and 2010. PCR-DGGE analysis of 16S and 18S rDNA fragments We analyzed the composition of the bacterial and fungal communities in the litter and soil samples by PCR-DGGE . Total soil DNA was extracted from 0.4 g of each sample using the FastDNA SPIN Kit for Soil (MP Biomedicals, Illkirch-Graffenstaden, France) in accordance with the manufacturer’s instructions. Given the difficulty of extracting DNA from the Andosol samples, heat-treated skim milk (treated at 115 °C for 5 min) was added before cell lysis to inhibit DNA adsorption to humic acid and allophane in the soil . However, DNA could not be extracted from two samples from the A horizon of Andosols. Thus, four samples from the A horizon of Andosols were used for the following analysis. The bacterial 16S rDNA fragment for DGGE analysis was amplified by PCR with the primer set 968f-GC and 1378r. After initial denaturation at 94 °C for 2 min, 34 amplification cycles were performed (denaturation at 94 °C for 15 s, annealing at 55 °C for 30 s, and extension at 68 °C for 30 s). The fungal 18S rDNA fragment was amplified with the primer set NS1 and GCFung. After denaturation at 94 °C for 2 min, 30 amplification cycles were performed (denaturation at 94 °C for 15 s, annealing at 50 °C for 30 s, and extension at 68 °C for 30 s). 
The PCR products were purified using the QIAquick PCR Purification Kit (Qiagen, Valencia, CA, USA). The DGGE analysis was performed with a DCode Universal Mutation Detection System (BioRad Laboratories, Hercules, CA, USA). For bacterial community analysis, a 6% polyacrylamide gel with a linear denaturing gradient range of 50%–70% was used to separate the 16S rDNA PCR products (100% denaturant is defined as 7 M urea and 40% [v/v] formamide). Each lane was loaded with 200 ng purified PCR products. The PCR products were separated by electrophoresis at 58 °C and 50 V for 18 h. For fungal community analysis, a 7% polyacrylamide gel with a linear denaturing gradient ranging from 20% to 45% was used, and the running condition was 60°C and 50 V for 20 h. In the bacterial and fungal DGGE analyses, DGGE Markers III and IV (Nippon Gene, Toyama, Japan) were used as the molecular markers, respectively . After electrophoresis, the gels were stained with SYBR Green I Nucleic Acid Gel Stain (Cambrex Bio Science, Rockland, ME, USA) and scanned with a ChemiDoc XRS system (BioRad Laboratories). Gel images were analyzed using Fingerprinting II software (BioRad Laboratories). Community diversity was evaluated using the number of DGGE bands (species richness) and the Shannon–Wiener diversity index ( H′ ). H′ was calculated using the following equation: H′ = −Σ p i (ln p i ), where p i is the proportion of the intensity of each band to the sum of intensity per profile. The proportion of the intensity of each band was also used in the principal coordinate analysis (PCoA). BIOLOG and microbial biomass C measurements CLPPs of bacteria and fungi, based on the BIOLOG ECO plate and SFN2 plate (BIOLOG, Hayward, USA), respectively , were determined to assess the potential functioning and functional diversity . The CLPP was conducted within one week after sampling. For the fungal CLPP, the inoculation solution was supplemented with antibiotics (10 mg L −1 streptomycin sulfate and 5 mg L −1 chlortetracycline) to inhibit bacterial growth. The ECO plates were incubated at 28 °C for 72 h, and absorbances at 595 nm and 750 nm were measured with a microplate reader (Model 680XR, BioRad). After correcting for the absorbances at 595 nm and 750 nm in each well at 0 h and the water well at 72 h, the value for each well used for subsequent analysis was the 595 nm absorbance (color development plus turbidity) minus the 750 nm absorbance (turbidity) at 72 h . The SFN2 plates were incubated for 168 h; the absorbance was measured at 750 nm and corrected for the readings in each well at 0 h and in the water well at 168 h. Well optical density values of less than 0.1 were set to zero. The overall color development in each plate was expressed as the average well color development (AWCD). Potential functional diversity was calculated using the Shannon–Wiener index ( H ′): H ′ = −Σ p i (ln p i ), where p i is the proportion of the absorbance value of the i th substrate to the sum of absorbance values of all substrates in a plate. The proportion of the absorbance value of each substrate was also used in the PCoA. Microbial biomass C in the soils was measured using the chloroform fumigation–extraction method as described previously . Soil was fumigated with ethanol-free chloroform for 24 h at 25°C and then extracted with 0.5 M K 2 SO 4 for 30 min. The organic C content in the extracts was measured with an organic C analyzer (Shimadzu TOC-V, Kyoto, Japan). 
Soil microbial biomass C (C mic ) was calculated using a conversion factor ( k EC = 0.49) as follows : C mic (μg g −1 ) = E C / k EC , where E C = (amount of C (μg g −1 ) extracted by 0.5 M K 2 SO 4 from fumigated soil)–(amount of C (μg g −1 ) extracted by 0.5 M K 2 SO 4 from non-fumigated soil).
Statistical analyses
Welch’s t -test was used to detect a significant difference between the means of two samples. Scheffé’s test together with one-way analysis of variance (ANOVA) were used to evaluate significant differences for multiple-group comparisons. Pearson correlation analysis was performed to measure the strength of the associations between variables. These analyses were conducted using BellCurve for Excel (Social Survey Research Information, Tokyo, Japan). The PCoA was performed on the data from the DGGE analysis and CLPP using Bray–Curtis dissimilarity matrices. Procrustes analysis, a method of comparing two sets of configurations, was performed to assess the extent that the CLPP and DGGE data yielded similar results with respect to PCoA ordinations among samples with 999 permutations. The PCoA and Procrustes analyses were conducted with the vegan package in R (version 4.1.2).
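To make the diversity and colour-development calculations described above concrete, the short Python sketch below computes the Shannon–Wiener index H′ from a vector of DGGE band intensities and the AWCD plus functional H′ from BIOLOG well readings. It is a minimal illustration with made-up numbers: the band intensities, the well optical densities, and the assumption that the 0 h and water-well blank corrections have already been applied are all hypothetical, not data from this study.

```python
import numpy as np

def shannon(values: np.ndarray) -> float:
    """Shannon-Wiener index H' = -sum(p_i * ln p_i), where p_i is the share of
    each band intensity (DGGE) or substrate absorbance (BIOLOG) in the total."""
    v = np.asarray(values, dtype=float)
    v = v[v > 0]
    p = v / v.sum()
    return float(-(p * np.log(p)).sum())

# --- DGGE: hypothetical band intensities for one lane ---
band_intensities = np.array([120.0, 80.0, 45.0, 30.0, 5.0])
print("DGGE richness:", band_intensities.size,
      " H':", round(shannon(band_intensities), 3))

# --- BIOLOG ECO plate: hypothetical readings for the 31 substrate wells ---
od595_72h = np.random.default_rng(1).uniform(0.05, 1.2, 31)
od750_72h = np.random.default_rng(2).uniform(0.02, 0.30, 31)
# Correct colour development for turbidity (595 nm minus 750 nm); the 0 h and
# water-well blank corrections are assumed to have been applied upstream.
corrected = od595_72h - od750_72h
corrected[corrected < 0.1] = 0.0       # values below 0.1 set to zero
awcd = corrected.mean()                # average well colour development
print("AWCD:", round(awcd, 3), " functional H':", round(shannon(corrected), 3))
```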
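The ordination and configuration-comparison steps (PCoA on Bray–Curtis dissimilarities, followed by Procrustes analysis with a permutation test) were run in R with the vegan package in the study itself; the sketch below shows the same ideas in Python purely for illustration. The input profiles are synthetic, and the permutation loop is a simple stand-in for vegan's protest(), not a reimplementation of it.

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.spatial import procrustes

def pcoa(profile: np.ndarray, k: int = 2) -> np.ndarray:
    """Principal coordinate analysis (classical metric scaling) of a
    samples-by-features table using Bray-Curtis dissimilarities."""
    d = squareform(pdist(profile, metric="braycurtis"))
    n = d.shape[0]
    j = np.eye(n) - np.ones((n, n)) / n          # centering matrix
    b = -0.5 * j @ (d ** 2) @ j                  # double-centred matrix
    vals, vecs = np.linalg.eigh(b)
    order = np.argsort(vals)[::-1]
    vals, vecs = np.clip(vals[order], 0, None), vecs[:, order]
    return vecs[:, :k] * np.sqrt(vals[:k])

rng = np.random.default_rng(3)
dgge = rng.dirichlet(np.ones(40), size=20)       # hypothetical band-intensity shares
clpp = rng.dirichlet(np.ones(31), size=20)       # hypothetical substrate-use shares

ord_dgge, ord_clpp = pcoa(dgge), pcoa(clpp)

# Procrustes superimposition plus a simple permutation test (999 permutations),
# analogous in spirit to a protest()-style significance test.
_, _, m2_obs = procrustes(ord_dgge, ord_clpp)
perms = 999
count = sum(
    procrustes(ord_dgge, ord_clpp[rng.permutation(len(ord_clpp))])[2] <= m2_obs
    for _ in range(perms)
)
p_value = (count + 1) / (perms + 1)
print(f"Procrustes m2 = {m2_obs:.3f}, permutation p = {p_value:.3f}")
```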
DGGE analysis of bacterial and fungal communities
Bacterial richness (i.e., the number of bands in a 16S rDNA DGGE fingerprint) in the A horizon was significantly higher in the Andosols than in the Cambisols, but no significant difference was observed in the O horizon between the Andosols and Cambisols ( ). Both soil types exhibited significantly higher bacterial richness in the A horizon than in the O horizon ( p <0.05). Bacterial H ′ was also significantly greater in the A horizon than in the O horizon ( p <0.05), whereas no significant difference was observed between the soil types in each of the O and A horizons. Only a significant negative influence of hemicellulose content on bacterial H ′ in the O horizon ( p <0.05; ) and a significant positive influence of Fe p content on bacterial H ′ in the A horizon ( p <0.05; ) were observed. Contrary to the observations for the bacterial community diversity, no significant differences in fungal community richness and H ′ between soil types and between soil horizons were observed ( ). Altitude had a significant positive influence on the fungal community richness and H ′ in the O horizon ( p <0.05; ). Fungal H ′ was significantly positively correlated with lignin content ( p <0.05) and negatively correlated with water-soluble polysaccharide content ( p <0.05) in the O horizon ( ).
No significant influence of soil properties on the fungal community diversity was observed in the A horizon ( ). In the PCoA ordination, the DGGE profiles were clearly divided between the O and A horizons for both bacterial ( ) and fungal ( ) communities. In both horizons, explicit distinction between the soil types for both microbial communities was not observed (Figs , ). Altitude and pH influenced the bacterial DGGE profiles in common with the O and A horizons (Figs and ), whereas a significant influence of C/N ratio on the fungal DGGE profiles was discerned in both horizons (Figs and ).
BIOLOG analysis of bacterial and fungal communities
The AWCD and metabolic diversity, H ′, of the bacterial community based on a BIOLOG ECO plate analysis exhibited no significant differences between the soil types and between the soil horizons ( ). Organic C ( p <0.05) and water-soluble polysaccharide contents ( p <0.05) negatively affected the bacterial AWCD, whereas lignin content ( p <0.05) was positively correlated with the bacterial AWCD in the O horizon ( ). Water-soluble polysaccharide content negatively affected potential functional diversity, H ′, of the bacterial community in the O horizon ( p <0.05). No significant influence of litter properties on AWCD and potential functional diversity of the fungal community was observed in the O horizon ( ). In the A horizon, no soil properties were significantly correlated with AWCD and metabolic diversity of the bacterial and fungal communities other than a significant negative influence of soil pH on the fungal potential functional diversity ( p <0.05; ). It is noteworthy that microbial biomass C was not significantly correlated with AWCD and metabolic diversity of the bacterial and fungal communities. In the PCoA ordination, the bacterial CLPP tended to be separated between the O and A horizons ( ), but such a trend was not observed for the fungal CLPP ( ). In each of the O and A horizons, no distinct difference was detected between the soil types for both microbial CLPPs (Figs , ). In the O horizon, the same indicators of nutrient status for litter (i.e., C/N ratio, lignin, lipids, total N, and water-soluble polysaccharide contents) significantly affected the CLPP for the bacterial and fungal communities (Figs ). In the A horizon, all soil properties examined did not influence the CLPP for both bacterial and fungal communities other than pH affecting the fungal CLPP ( ).
Bacterial CLPP, and bacterial and fungal community composition showed distinct differences between the O and A horizons in the PCoA ( ). In addition, the bacterial species richness and H ′ based on the DGGE profile were significantly greater in the A horizon than in the O horizon ( p <0.05; ). These findings supported Hypothesis 1 that microbial community composition and potential functioning differ between the O and A horizons. This might be attributed to the fact that the composition of organic matter, the substrate of microorganisms, largely differs between plant litter in the O horizon (mainly cellulose, lignin, and hemicellulose) and soil organic matter in the A horizon (primarily humic substances) .
According to Kirchman, high proportions of cellulose and hemicellulose would select for certain bacteria in the O horizon , which would lower the bacterial species richness and H ′ in the O horizon and differentiate the microbial community composition and potential functioning between the two horizons. However, with regard to the fungal CLPP, no distinct difference was observed between the O and A horizons in the PCoA ( ). We do not have a reasonable explanation for the fungal CLPP result. With respect to the bacterial DGGE profiles in the O and A horizons (Figs and ), the pH, nutrient status (e.g., water-soluble polysaccharide, lignin, organic C, and total N contents, and C/N ratio), and altitude had significant influences. Significant effects of nutrient status and, to a greater extent, pH on bacterial community composition have been reported by several studies [ – , – ]. For the fungal community, the C/N ratio was a common driver that shaped the community composition in both horizons (Figs and ). Lauber et al. and Bao et al. also observed a significant effect of C/N ratio on soil fungal community composition. For the bacterial and fungal CLPPs, several nutrient status indicators had significant influences in the O horizon ( ), but such effects were not observed in the A horizon ( ). Similarly, Klimek et al. reported that the bacterial CLPP was influenced by nutrient status in the O horizon, but such an effect was not observed in the A horizon in temperate forest soils . In contrast, significant effects of nutrient status on the bacterial CLPP were observed in pine forest soil and Mediterranean forest soils . Hence, the contribution of nutrient status to the CLPP is likely to vary between sites, and factor(s) influencing the CLPP might be site-specific. No strong influence of soil type on the DGGE profiles and the CLPP was detected in either the O or A horizons (Figs and ). This result differed from previous observations for arable soils where the soil type was the primary determinant of bacterial community composition . This inconsistency might be ascribed to the greater variation in the litter and soil properties between sites in forests than in arable fields. It should be noted that although mycorrhizal fungi would play important roles in forests, our approach used in this study was not able to specify their composition and potential functioning. We used Procrustes analysis to explore the link between the O and A horizons for the DGGE profiles and for the CLPPs, but observed no significant correlations between the horizons ( ). This observation did not substantiate Hypothesis 2 that differences in the O horizon between sites are accompanied by corresponding differences in the A horizon between sites for both community composition and potential functioning. The present results suggested that different factors had considerable influences on microbial communities between the O and A horizons. For example, for the fungal DGGE profiles, significant influences of total N, organic C, and pH were apparent only in the O horizon (Figs and ). Interestingly, Procrustes analysis revealed significant couplings between the bacterial and fungal communities for both community composition and potential functioning, i.e., bacterial DGGE profile−fungal DGGE profile ( p <0.05 for O horizon; p <0.01 for A horizon) and bacterial CLPP−fungal CLPP ( p = 0.001 for O horizon; p <0.01 for A horizon) in each of the O and A horizons ( ). 
In addition, the potential functional diversity (i.e., H ′ based on the BIOLOG results) of the bacterial and fungal communities was significantly correlated in each horizon ( p <0.001 for O horizon and p <0.01 for A horizon; Tables and ). Coelho et al. reported a significant congruence between bacterial and microeukaryotic PCoA ordinations based on pyrosequencing data for 16S/18S rDNA in several sediment samples . Singh et al. reported a significant correlation between bacterial and fungal terminal restriction fragment length polymorphisms in a Procrustes analysis of grassland soils . A possible explanation for the present result is that common factors had significant influences on the profiles of both bacterial and fungal communities: pH in the bacterial and fungal DGGE profiles ( ) and total N, C/N ratio, lignin, and lipids in the bacterial and fungal CLPP ( ) in the O horizon, and C/N ratio in the bacterial and fungal DGGE profiles ( ) in the A horizon (no common factors were detected for the bacterial and fungal CLPPs in the A horizon). It should be noted that there will be non-considered factors that explain microbial community composition and potential functioning, especially for the A horizon, in the present study because, contrary to the results for the O horizon ( ), most of the examined factors had no significant influence on the DGGE and CLPP except for the bacterial DGGE profile in the A horizon ( ). Although a significant coupling was observed between the bacterial community composition and potential functioning (i.e., bacterial DGGE profile−bacterial CLPP) in the A horizon ( p <0.01; ), such a result was not obtained for the fungal community (i.e., fungal DGGE profile−fungal CLPP) in the A horizon and for both bacterial and fungal communities in the O horizon ( ). In addition, the genetic diversity (i.e., species richness and H ′ based on the DGGE profile) were not significantly correlated to the potential functional diversity ( H ′ based on the BIOLOG results) for bacterial and fungal communities in the O and A horizons (Tables and ). These results did not support Hypothesis 3 that potential functioning of the microbial community is associated with the community composition. The present observations would be affected, probably to a substantial degree, by the fact that the BIOLOG results reflect only rapidly growing microorganisms, whereas cultivation-independent DGGE analysis reflects the entire microbial community in the soil. However, in a previous cluster analysis, bacterial DGGE and bacterial CLPP results were similar for tea plantation and forest soils . Furthermore, strong couplings have been reported for bacterial CLPP and phospholipid fatty acid profiles in a montane area , and bacterial CLPP and bacterial community composition in pitcher plant microcosms . Further research is warranted to understand the correspondence between microbial community composition and potential functioning. Both the CLPP and DGGE profile were clearly separated between the O and A horizons in PCoA ordinations except for the fungal CLPP. No significant couplings of the CLPP between the O and A horizons and of the DGGE profile between the two horizons were detected by Procrustes analysis for both bacterial and fungal communities. Also, no significant coupling was observed between the community composition and the potential functioning except for the bacterial community in the A horizon. 
In addition, the community diversity was not associated with the potential functional diversity for the bacterial and fungal communities. Unexpectedly, significant links between the bacterial and fungal DGGE profiles and between the bacterial and fungal CLPPs were observed in each of the O and A horizons. The present results do not fully unravel the factors that shape the composition and potential functioning of microbial communities in forest soils, especially for the A horizon, and further studies are warranted to elucidate the factors. Nevertheless, the present study clearly showed that different factors have substantial influences on microbial communities between the O and A horizons but that common factors affect both the bacterial and fungal communities in each horizon. These results have important implications for C and nutrient cycling in forest soils rich in organic matter.
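As a concrete illustration of the coupling analyses referred to above, the following R sketch (using the vegan package) shows how a Procrustes comparison of two community ordinations and Shannon diversity (H′) could be computed; the site-by-band matrices are randomly generated placeholders, not the study's DGGE or BIOLOG data.

```r
# Hedged sketch of the coupling analyses referred to above: a Procrustes test
# between two community ordinations plus Shannon diversity (H').
# `bact` and `fungi` are randomly generated site x band tables, not real data.
library(vegan)

set.seed(1)
bact  <- matrix(rpois(10 * 20, 5), nrow = 10)   # 10 sites x 20 bacterial DGGE bands (fake)
fungi <- matrix(rpois(10 * 15, 5), nrow = 10)   # 10 sites x 15 fungal DGGE bands (fake)

# Ordinations (principal coordinates) from Bray-Curtis dissimilarities
ord_b <- cmdscale(vegdist(bact,  method = "bray"), k = 2)
ord_f <- cmdscale(vegdist(fungi, method = "bray"), k = 2)

# Procrustes rotation with a permutation test of the bacterial-fungal coupling
fit <- protest(ord_b, ord_f, permutations = 999)
fit$t0      # Procrustes correlation
fit$signif  # permutation p-value

# Shannon diversity (H') per site, as computed for the DGGE and BIOLOG profiles
H_bact <- diversity(bact, index = "shannon")
```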
Integrated metabolomics and transcriptomics analysis reveals the potential mechanism by which Methyl jasmonate enhances the pungent flavor of soilless-cultivated Chinese chives (
a3904d12-62b4-4ee8-a7e9-d80e06aab24d
11931784
Biochemistry[mh]
The genus Allium includes some of the oldest cultivated vegetables, which are prized for their ornamental qualities and use as spices and herbs . Thiosulfates and reactive S-alk(en)yl cysteine sulfoxides (CSOs) give Chinese chives ( Allium tuberosum ) their characteristic aroma . CSO biosynthesis in Allium sativum starts with sulfate assimilation, which involves high-affinity sulfate transporters (SULTRs) that import sulfate. Sulfate is activated by ATP sulfurylase, reduced to sulfite by adenosine-5'-phosphosulfate reductase (APR), and further reduced to sulfide by sulfite reductase (SiR), leading to cysteine biosynthesis via O-acetylserine sulfhydrylase (OASTL) and glutathione formation through glutamate-cysteine ligase (GCL) and glutathione synthase (GS) . Glutathione, a crucial organic sulfur compound in plants, initiates CSO biosynthesis through γ-glutamyl transpeptidase (GGT) and flavin-containing monooxygenase (FMO), which are involved in alliin synthesis . Onions primarily contain isoalliin as their major CSO, while garlic contains alliin as its major CSO, with methiin serving as a secondary CSO in both species . Despite the culinary prevalence of garlic and onion, research on the nutritional and flavor properties of Chinese chives, especially with respect to the biosynthesis and regulation of flavor compounds, is limited. Furthermore, in garlic, only γ-glutamyl transpeptidase and S -oxygenase involved in alliin biosynthesis have been identified . Therefore, it remains challenging to generate plants with improved CSO levels or microbial systems for CSO synthesis through transgenic or gene-editing technologies . Traditional soil-cultivated Chinese chives are vulnerable to safety and quality issues caused by infestation with Bradysia odoriphaga Yang et Zhang. Hydroponics offers a safer production method for Chinese chives, but the milder flavor of hydroponically grown chives poses a market challenge. The reduced pungency of hydroponic Chinese chives results from insufficient CSO accumulation, and FMO catalyzes sulfoxidation, the rate-limiting step in CSO synthesis . Liu et al. reported that salt stress activates the transcription of the key enzyme-encoding gene AtuFMO1 , increasing CSO accumulation . Yoshimoto et al. (2022) reported CSO accumulation in Chinese chive callus tissue and the role of alliinase in CSO degradation . CSO biosynthesis draws on metabolic pathways involving valine, serine, cysteine, and glutamic acid . The genes related to CSO synthesis identified in some Allium vegetables include plant cysteine synthase (PCS), GGT, and FMO . However, research on their roles and regulation in Chinese chives is limited. Investigating the molecular mechanisms of CSO biosynthesis could improve the nutritional, flavor, and medicinal qualities of Allium vegetables, including Chinese chives. Jasmonic acid (JA) signaling is crucial for regulating numerous physiological processes, including the production of defense metabolites and protein hydrolytic enzymes and the release of volatile organic compounds . α-Linolenic acid is sequentially converted to 12-oxo-phytodienoic acid (OPDA) by lipoxygenase (LOX), allene oxide synthase (AOS), and allene oxide cyclase (AOC) in the plastid. Subsequently, 12-oxo-phytodienoic acid reductase (OPR) in the peroxisome transforms it into JA, which is further converted into methyl jasmonate (MeJA) by jasmonate O-methyltransferase (JMT) . 
Active jasmonoyl-isoleucine (JA-Ile) is recognized by the receptor COI1, triggering the degradation of the JAZ repressor and releasing the MYC2 transcription factor, which activates JA signaling . MeJA can consistently induce secondary metabolite production across the plant kingdom and has been approved as a safe compound for use in all food products prior to harvest . The application of MeJA considerably affects volatile compound biochemical pathways, enhancing the aroma of certain fruits and vegetables . Wang et al. reported that MeJA application increased the levels of sulfur-containing volatile compounds in hydroponically grown Chinese chives, increasing their pungency . However, the metabolic regulatory network and mechanisms underlying the effects of exogenous MeJA on flavor quality and the biosynthesis of flavor precursor compounds in Chinese chives remain unclear. Chinese chive is a tetraploid species with a 1C nuclear genome of approximately 15 Gb, slightly smaller than that of onion but roughly 30 and 100 times larger than those of rice and Arabidopsis thaliana , respectively . Compared with other vegetables such as solanaceous crops, its genomic resources, including molecular markers and functional genes, remain limited . Currently, many non-model plants, including Chinese chive, lack complete genomic data, preventing the establishment of reliable reference genomes . Reference transcriptomes can be constructed through de novo assembly of RNA sequencing (RNA-Seq) data . RNA-Seq, a promising application of next-generation sequencing (NGS) , is a cost-effective and widely utilized strategy that has been successfully applied to analyze entire transcriptomes for genome-wide quantification of transcripts, identification of differentially expressed genes, development of molecular markers, and transcript annotation [ – ]. Analysis of transcript and metabolite profiles from elicitor-treated plants is effective for elucidating gene functions in secondary metabolite biosynthesis . The molecular biology of special-flavor vegetables, including Chinese chives, has been largely neglected, resulting in limited data on the mechanisms underlying flavor quality. The identified volatile secondary metabolites contribute to flavor quality; however, the lack of transcriptional data restricts the use of biotechnological tools for crop improvement and understanding of flavor metabolic pathways. We used a multiomics approach integrating plant metabolomics with transcriptomics to understand the MeJA-mediated biosynthesis of flavor compounds, particularly gene expression changes, in Chinese chives. This study investigated the effects of exogenous MeJA on flavor formation in Chinese chives, with a focus on the key metabolic pathways involved in flavor compound biosynthesis. The impact of MeJA on metabolite accumulation and the expression of flavor-related genes in hydroponically cultivated Chinese chives was examined to elucidate the regulatory mechanisms of MeJA-induced CSO biosynthesis, providing insights into flavor formation and enhancement in hydroponically grown Chinese chives. Chinese chive plants and MeJA priming The Chinese chive variety “Chive God F1,” cultivated by the Seed and Seedling Research Institute of Fugou County, Henan Province, was used as the experimental material. This variety is characterized by strong disease resistance and cold tolerance. Seedlings were raised in the core demonstration area for Chinese chives in Wushan, China (N 34°250´-34°570´, E 104°340´-105°080´). 
One-year-old Chinese chive seedlings were transplanted into the substrate cultivation and hydroponic systems in a glass greenhouse (20 ± 3 °C/15 ± 3 °C (day/night); relative humidity, 60-70%) at Gansu Agricultural University, Lanzhou, Gansu Province, P. R. China. The hydroponic system consisted of an 11-well rectangular hydroponic tank, with a nutrient mixture prepared according to the methodology described by Wu et al. (2008) . The substrate cultivation process involved mixing the cultivation substrate, perlite, and vermiculite in a 3:1:1 volume ratio within a 4 L plastic pot. A total of 352 plants were transferred to the hydroponic and substrate cultivation systems. A randomized design with three replicates was used in this study. The transplantation methods and nutrient management techniques employed in the substrate culture and hydroponic systems have been previously described . At 21 days, Chinese chive seedlings, which had grown to 15–20 cm, were treated with MeJA (500 µM) by foliar spraying, applied evenly to the leaves every morning between 7:00 and 8:00 for seven consecutive days. The MeJA concentration was based on that adopted by Wang et al. (2022) . The S-MeJA (substrate cultivation) and H-MeJA (hydroponic cultivation) Chinese chives were treated with 150 mL of the specified MeJA solution daily, whereas the control groups (S-CK: substrate cultivation and H-CK: hydroponic cultivation) were sprayed with 150 mL of ultrapure water. The Chinese chives exhibited complete growth and reached commercial standards within a total growth cycle of 35 days. All treated chives were harvested, immediately frozen in liquid nitrogen, and stored at -80 °C for subsequent analysis. Each sample was obtained from at least 50 plants, and the resulting data were averaged from three independent biological replicates. Growth and nutritional quality analysis The harvested samples were dried at 105 °C for 30 min, followed by 75 °C until constant weight, and dry matter content was recorded. Nitrate content was determined according to Cataldo et al. (1975) . Vitamin C, soluble protein, and soluble sugar contents were measured using 2,6-dichlorophenolindophenol dye, Coomassie Brilliant Blue, and anthrone colorimetry [ – ], respectively. Total phenolics and flavonoids were quantified using Folin-Ciocalteu and sodium nitrite-aluminum nitrate assays . Ultra-performance liquid chromatography tandem mass spectrometry (UPLC‒MS/MS) analysis of nonvolatile components Chinese chive leaves were freeze-dried via a vacuum freeze-dryer (SCHENTZ-100 F) and ground in a mixer mill (MM 400, Retsch) at 30 Hz for 1.5 min. A 100 mg sample of the powder was dissolved in 1.2 mL of 70% methanol, agitated every 30 min for three hours, and stored at 4 °C overnight. The extract was subsequently filtered by centrifugation at 12,298 × g for 10 min. Chinese chive leaf extracts were then analyzed on an ultra-performance liquid chromatography (UPLC) system (SHIMADZU NexeraX2). Metabolomic analysis, including the identification and quantification of metabolites , was performed by Wuhan MetWare Biotechnology Co., Ltd. (Wuhan, China) according to standard protocols. The analytical conditions included the use of an Agilent SB-C18 column (1.8 μm, 2.1 mm × 100 mm) and a mobile phase comprising 0.1% formic acid in water (solvent A) and acetonitrile (solvent B). 
The gradient commenced at 95% A and 5% B, transitioned to 5% A and 95% B over 9 min, and subsequently returned to 95% A and 5% B for a duration of 2.9 min. The flow rate was set to 0.35 mL/min, the column temperature was maintained at 40 °C, and the injection volume was 4 µL. The effluent was then connected to an electrospray ionization (ESI) triple-quadrupole linear ion trap (QTRAP) mass spectrometry (MS) system. Mass spectrometry was performed on an AB4500 QTRAP UPLC/MS/MS system (AB Sciex) equipped with an ESI Turbo ion spray interface, controlled by Analyst 1.6.3 software in positive and negative ion modes. ESI parameters included a turbo spray ion source, 550 °C source temperature, and ion spray voltage of 5500 V (positive)/-4500 V (negative). Gas settings were 50, 60, and 25.0 psi for GSI, GSII, and CUR, respectively, with high collision energy. The instrument was calibrated in QQQ and LIT modes using 10 and 100 µmol/L polypropylene glycol solutions. QQQ scans in MRM mode used nitrogen as the collision gas, with optimized DP and CE values for specific metabolites during each elution period. Headspace solid-phase microextraction gas chromatography–mass spectrometry (HS-SPME-GC‒MS) analysis of volatile components In accordance with the methods of Wang et al. (2023) , HS-SPME-GC‒MS was used to detect volatile components in the substrate-cultivated and hydroponic Chinese chive leaves. Metabolic data analysis methods The LC-MS platform, using Analyst 1.6.3 software, processed the mass spectrometry data, where the peak area (Area) of each chromatographic peak represented the relative abundance of the corresponding compound. The GC-MS platform processed the raw mass spectrometry data using Qualitative Analysis Workflows B.08.00 software for qualitative analysis, where the peak area of each metabolite was normalized to that of the internal standard ([2H3]-beta-Ionone). Metabolomic data were analyzed using R ( http://www.r-project.org ) to perform principal component analysis (PCA), hierarchical clustering analysis (HCA), Pearson’s correlation coefficient (PCC), and orthogonal partial least squares discriminant analysis (OPLS-DA) on data from all samples. The metabolite signal intensities were normalized and represented via color spectra. Before OPLS-DA, the data underwent log transformation (Log 2 ) and mean centering. The variable importance in projection (VIP) values were extracted from OPLS-DA result. Metabolites with log 2 FC (fold change) ≥ 1, and VIP ≥ 1 were considered significantly regulated between groups. The identified metabolites were annotated via the Kyoto Encyclopedia of Genes and Genomes (KEGG, http://www.kegg.jp/kegg/compound/ ) and subsequently mapped to KEGG metabolic pathways ( http://www.kegg.jp/kegg/pathway.html ). The analysis of significantly regulated metabolites was conducted via metabolite pooling enrichment analysis (MSEA), with the level of significance determined through the application of hypergeometric tests. RNA extraction, detection, library construction, and sequencing Total RNA was extracted from the samples using the TaKaRa Mini BEST Plant RNA Extraction Kit (TaKaRa, Beijing, China) according to the manufacturer’s instructions. The degradation and contamination of RNA were monitored via 1% agarose gel electrophoresis. The RNA purity was evaluated via a NanoPhotometer ® spectrophotometer (IMPLEN, CA, USA), and the RNA concentration was determined via a Qubit ® RNA Assay Kit and a Qubit ® 2.0 Fluorometer (Life Technologies, CA, USA). 
RNA integrity was evaluated via an RNA Nano 6000 Assay Kit on a Bioanalyzer 2100 system (Agilent Technologies). A total of 1 µg of RNA was used to construct libraries with the NEBNext ® UltraTM RNA Library Prep Kit (Illumina). cDNA fragments of approximately 200 bp were size-selected with AMPure XP beads, PCR-amplified, and purified to obtain the final library. The libraries were pooled in accordance with the requisite effective concentration and sequencing volume specifications for Illumina sequencing, yielding 150 bp paired-end reads. The raw data were filtered with Fastp v0.19.3 to remove reads identified as containing adapters. All subsequent analyses were based on the resulting set of clean reads. Clean reads were assembled via Trinity v2.11.0, and the transcripts were clustered to remove redundancy using Corset ( https://github.com/trinityrnaseq/trinityrnaseq ). Gene expression levels were calculated using Fragments Per Kilobase of transcript per Million fragments mapped (FPKM). Transcriptome analysis Differential expression analysis was performed using DESeq2 with unnormalized raw read count data to identify differentially expressed genes (DEGs) in each sample. The Benjamini-Hochberg method was applied to correct the hypothesis testing probabilities (p-values) for multiple hypothesis testing, resulting in a False Discovery Rate (FDR). The criteria for selecting differentially expressed genes were |log 2 Fold Change| ≥ 1 and FDR < 0.05. The NR (Non-Redundant Protein Sequence Database, NCBI), NT (Non-Redundant Nucleotide Database, NCBI), Pfam (Protein Family Database), KOG/COG (Clusters of Orthologous Groups of Proteins), Swiss-Prot (UniProtKB/Swiss-Prot: manually annotated and curated protein sequences), and GO (Gene Ontology Database) were used for gene functional annotation and pathway analysis. Real-time quantitative PCR analysis Total RNA was extracted from leaves via an RNA extraction kit (Acycrate Biotechnology Co., Ltd., China). A 2 µL sample was used for cDNA synthesis with the Evo M-MLV reverse transcription kit. Quantitative RT‒PCR was conducted via the SYBR Green Kit on a LightCycler ® 480 II with the following amplification conditions: 95 °C for 15 min, followed by 40 cycles of 95 °C for 10 s and 60 °C for 30 s. Each sample was tested in triplicate, and the relative gene expression levels were calculated via the 2 −∆∆CT method. The analysis was conducted via the statistical software package SPSS 20.0, and bar charts were generated via Origin Pro (2021). The housekeeping gene DN253_c0_g1 was used as the internal reference standard (Supplemental Table ). Statistical analyses Data analysis was conducted using SPSS 21.0 (IBM, Armonk, NY, USA). Significant differences ( p < 0.05) were determined via ANOVA followed by Duncan’s multiple range test. Values are presented as the mean ± standard error of three biological replicates. Linear regression analysis was performed using Origin Pro 2021 ( p < 0.05, p < 0.01). Figures were created using Origin Pro 2021 and Adobe Illustrator 2020. Gene expression was log 2 -transformed using FPKM values; when FPKM values were zero, FPKM + 1 was used for log 2 transformation. The relative abundance of metabolites was represented as log 2 values of chromatographic peak areas. Metabolite level and gene expression level FPKM values were Z-score normalized using the MetWare cloud platform ( https://cloud.metware.cn ) to generate heatmaps. 
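As a rough illustration of the DEG screening described above (DESeq2 on raw counts, Benjamini-Hochberg FDR, with |log2 fold change| ≥ 1 and FDR < 0.05), the following R sketch shows how one such comparison might be run; the objects counts and coldata and the group labels are placeholders rather than the study's actual code.

```r
# Minimal sketch of the DEG screening, assuming `counts` is a gene x sample
# matrix of raw read counts and `coldata` is a data.frame of sample groups.
library(DESeq2)

coldata$group <- factor(coldata$group, levels = c("H_CK", "H_MeJA"))
dds <- DESeqDataSetFromMatrix(countData = counts,
                              colData   = coldata,
                              design    = ~ group)
dds <- DESeq(dds)                                              # negative binomial fit
res <- results(dds, contrast = c("group", "H_MeJA", "H_CK"))   # padj = Benjamini-Hochberg FDR

# Thresholds stated in the text: |log2 fold change| >= 1 and FDR < 0.05
deg <- subset(as.data.frame(res),
              !is.na(padj) & padj < 0.05 & abs(log2FoldChange) >= 1)
```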
Metabolite-level circos heatmaps were processed and visualized via the Chiplot cloud platform ( https://www.chiplot.online/ ), employing complete linkage for clustering, correlation as the distance measure, and Z-score normalization. 
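To make the differential-metabolite screening criteria concrete (VIP ≥ 1 from OPLS-DA combined with |log2 fold change| ≥ 1, as described in the metabolite data analysis above), here is a hedged R sketch based on the Bioconductor ropls package; the matrix metab (samples × metabolites, log2 peak areas) and the group factor are illustrative assumptions, not the authors' pipeline.

```r
# Rough sketch of differential-metabolite screening under stated assumptions:
# `metab` = samples x metabolites matrix of log2-transformed peak areas,
# `group` = factor with levels "H_CK" and "H_MeJA" (placeholders).
library(ropls)

mod <- opls(metab, group, predI = 1, orthoI = NA)   # OPLS-DA; orthoI = NA selects orthogonal components
vip <- getVipVn(mod)                                # variable importance in projection (VIP)

# log2 fold change between group means (data already on a log2 scale)
fc <- colMeans(metab[group == "H_MeJA", , drop = FALSE]) -
      colMeans(metab[group == "H_CK",   , drop = FALSE])

# Screening rule used in the text: VIP >= 1 together with |log2FC| >= 1
candidates <- names(vip)[vip >= 1 & abs(fc[names(vip)]) >= 1]
```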
Growth and nutritional evaluation The growth and nutritional quality of MeJA-treated Chinese chives in substrate and hydroponic media were assessed (Table ). Exogenous MeJA application significantly increased dry matter, leaf vitamin C, and flavonoid contents while reducing leaf nitrate levels in soilless-grown Chinese chives. In addition, MeJA improved the soluble sugar and total phenol contents in the leaves of both hydroponically and substrate-grown Chinese chives. Metabolomic analysis To examine metabolite accumulation changes in MeJA-treated substrates and hydroponic Chinese chives, 923 metabolites were detected via UPLC‒MS/MS and GC‒MS. These metabolites were classified into 23 categories (Fig. A). UPLC‒MS/MS identified 737 metabolites, including 19 terpenoids, 20 lignans, coumarins, 51 nucleotides and derivatives, 64 alkaloids, 68 flavonoids, 71 organic acids, 84 amino acids and derivatives, 98 others, 122 phenolic acids, and 140 lipids. GC‒MS identified 186 metabolites, including one ether, one acid, three halogenated hydrocarbons, three other compounds, six terpenoids, seven amines, eight phenols, and 11 alcohols. In addition, 11 aldehydes, 12 hydrocarbons, 15 sulfur-containing compounds, 16 ketones, 23 aromatic hydrocarbons, 25 esters, and 44 heterocyclic compounds were identified. A clustering heatmap was used to analyze the accumulation patterns of these 923 metabolites (Supplementary Table ), revealing differences between MeJA-treated hydroponic and substrate-grown Chinese chives. The heatmap shows hierarchical clustering of the different cultivation methods and MeJA treatment groups (Fig. B). The H-CK, H-MeJA, S-CK, and S-MeJA treatments formed four categories, indicating variation in metabolite accumulation. Clustering of biological replicates demonstrated good homogeneity and reliability. PCA was used to reveal the metabolomic features of the samples, enabling the observation of group differences in the PCA plots. The PCA score plot clearly revealed a separation between treatments, suggesting considerable changes in the metabolites of the MeJA-treated hydroponics and substrate-grown Chinese chives (Fig. , C). PC1 explained 26.45% of the variance in the 923 metabolite datasets, emphasizing differences in cultivation methods, whereas PC2 accounted for 16.93% of the variance related to MeJA treatment. Differentially abundant metabolite analysis In the H-CK vs. H-MeJA groups, 45 differentially abundant metabolites were identified (11 upregulated and 34 downregulated); in the S-CK vs. S-MeJA groups, 68 (13 upregulated and 55 downregulated); in the H-CK vs. S-CK groups, 48 (26 upregulated and 22 downregulated); and in the H-MeJA vs. S-CK groups, 80 (27 upregulated and 53 downregulated) were identified (Fig. , D and E) (Supplementary Tables – ). Further refinement via Fold Change, VIP, and p value generated volcano plots (Fig. , A-B). In hydroponically cultivated Chinese chives, 31 differentially abundant metabolites were identified in the H-CK vs. 
H-MeJA comparison (6 upregulated, 25 downregulated), including significant upregulation of two phenolic acids, two lipids, one alkaloid, and one aldehyde and downregulation of one amino acid, two phenolic acids, one nucleotide, five flavonoids, one alkaloid, one organic acid, and 14 lipids (Fig. A). For substrate-cultured Chinese chives, 45 differentially abundant metabolites were identified in the S-CK vs. S-MeJA groups (7 upregulated, 38 downregulated), with significant upregulation of one terpene, one amino acid, three lipids, one ester, and one other compound. The downregulated metabolites included three amino acids, one amine, one alcohol, two aromatic hydrocarbons, two phenolic acids, five sulfur-containing compounds, one nucleotide, three flavonoids, five other classes, one aldehyde, two alkaloids, four organic acids, five heterocyclic compounds, and three esters (Fig. B). Differentially abundant metabolites were cross-referenced with the KEGG database to determine their associated pathways. Enrichment analysis of the annotated results revealed pathways with significant enrichment of the differentially abundant metabolites. In the H-CK vs. H-MeJA comparison, most metabolites were enriched in phenylpropanoid, linoleic acid, folate, and flavonoid biosynthesis (Fig. C). In the S-CK vs. S-MeJA groups, the metabolites were enriched in purine metabolism; phosphatidylinositol metabolism; flavonoid biosynthesis; arginine biosynthesis; arginine and proline metabolism; and amino and nucleotide sugar metabolism (Fig. D). Transcriptome sequencing and splicing of transcripts To obtain transcriptional data for MeJA-treated hydroponic and substrate-grown Chinese chives, 12 RNA-Seq libraries from the H-CK, H-MeJA, S-CK, and S-MeJA samples were sequenced via the Illumina NovaSeq platform. Quality control via Fastp software revealed clean read percentages ranging from 96.4 to 97.7% (Supplementary Table ). The mean Q20 and Q30 values were 97.79% and 93.61%, respectively, indicating high sequencing accuracy. The matching rate ranged from 76 to 80%, ensuring good transcriptome coverage. The transcripts obtained through Trinity splicing served as references for further analysis (Supplementary Table ). A total of 222,884 transcripts were generated, averaging 748 base pairs in length, with N50/N90 sizes of 1,080/325 and a total sequence base number of 166,807,729. Clustering with Corset produced 210,598 unigenes, averaging 748 bp in length, with N50/N90 sizes of 1106/343 and a total base count of 163,842,037. Functional annotation and classification of transcriptome assemblies in Chinese chives Unigenes were annotated via the KEGG, Nr, Trembl, KOG, Gene Ontology (GO), SwissProt, and Pfam databases; of the 210,598 unigenes, 34.74% (73,169) matched at least one database (Supplementary Table ). Compared with those in the NR library, the Chinese chive transcript sequences were highly similar to those in asparagus (33,307, 47.55%) and had fewer matches to those in oil palm (4,336, 6.19%), date palm (3,670, 5.24%), and other species (Supplementary Fig. A). A total of 56,546 unigenes were assigned to at least one GO term (Supplementary Fig. B and Supplementary Table ) and categorized into biological processes, cellular components, and molecular functions. The predominant biological processes included “cellular processes” (35,111), “metabolic processes” (31,376), and “stress responses” (14,416). The main cellular components were “cells” (40,925), “cell parts” (40,847), and “organelles” (31,621). 
The molecular function category included “binding” (33,439), “catalytic activity” (31,879), and “transporter protein activity” (4,428). In the KOG functional classification, 42,600 unigenes were assigned to at least one category, with 4.5% being functionally unknown (Supplementary Fig. C). The major groups were “generic function prediction only” (10,322, 24.9%), “posttranslational modifications” (4,626, 10.9%), and “signal transduction mechanisms” (3,832, 9%). Other categories included “carbohydrate transport” (2,610, 6.1%), “translation” (2,382, 5.6%), “transcription” (2,254, 5.3%), and “RNA processing” (2,014, 4.7%). Differentially expressed gene screening and KEGG enrichment analysis A total of 2,667 DEGs were identified. Specifically, 504 DEGs were identified (280 upregulated and 224 downregulated) in the H-CK vs. H-MeJA comparison. In the S-CK vs. S-MeJA comparison, 945 DEGs were identified (355 upregulated, 590 downregulated). In the H-CK vs. S-CK comparison, 733 DEGs were detected (316 upregulated, 417 downregulated). In the H-MeJA vs. S-CK comparison, 1,053 DEGs were observed (570 upregulated, 483 downregulated) (Fig. A and Supplementary Tables – ). In addition, 71 common DEGs were found between hydroponically grown H-CK and H-MeJA and between substrate-grown S-CK and S-MeJA, with 49 upregulated and 22 downregulated genes. Moreover, 259 common DEGs were identified in both the H-CK vs. S-CK and S-CK vs. S-MeJA comparisons, comprising 150 upregulated and 109 downregulated genes (Fig. B). The Rich factor, q value, and number of enriched DEGs were used to quantify the degree of KEGG enrichment (Supplementary Tables – ). The top 20 most significantly enriched pathway entries were selected for display, or all if fewer than 20 were enriched (Fig. , C-D). KEGG enrichment analysis revealed plant signaling and biometabolic pathways involving the DEGs in MeJA-treated hydroponic and substrate-cultured Chinese chive leaves. The photosynthesis-antenna protein pathway was significantly enriched in the H-CK group compared with the H-MeJA group (q value < 0.05) (Fig. C). In the S-CK vs. S-MeJA comparison, terpenoid backbone biosynthesis, metabolic pathways, flavonoid biosynthesis, the biosynthesis of secondary metabolites, and cutin, suberin, and wax biosynthesis were significantly enriched (q value < 0.05), with metabolic pathways and the biosynthesis of secondary metabolites being the most enriched for the DEGs (Fig. D). These metabolic pathways provide insight into the metabolic processes of MeJA-treated hydroponic and substrate-grown Chinese chives. Integrating transcriptome and metabolome analysis KEGG enrichment analysis of differentially abundant metabolites and DEGs (Supplementary Tables – ) revealed pathways jointly enriched for differentially abundant metabolites and differentially expressed genes. In the H-CK vs. H-MeJA comparison, the significant pathways included secondary metabolite biosynthesis (five DEGs, five metabolites), linoleic acid metabolism (one DEG, three metabolites), and phenylpropanoid biosynthesis (one DEG, three metabolites) (Fig. A). For the S-CK vs. S-MeJA comparison, the enriched pathways were pentose and glucuronate interconversions (11 DEGs, one metabolite), galactose metabolism (nine DEGs, one metabolite), linoleic acid metabolism (three DEGs, one metabolite), phenylalanine metabolism (12 DEGs, one metabolite), and metabolic pathways (123 DEGs, 14 metabolites) (Fig. B). 
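The KEGG and MSEA enrichment described above rests on a hypergeometric test; the short R sketch below shows how a single pathway's p-value and Rich factor could be computed, with all counts invented purely for illustration.

```r
# Hedged sketch of a hypergeometric enrichment test for a single KEGG pathway.
# Every count below is a placeholder, not a value taken from the study.
N <- 73169   # annotated background genes (assumed)
K <- 350     # background genes assigned to the pathway (assumed)
n <- 945     # DEGs in the comparison (assumed)
k <- 25      # DEGs that fall in the pathway (assumed)

# P(X >= k) under the hypergeometric null distribution
p_value <- phyper(k - 1, K, N - K, n, lower.tail = FALSE)

# One common definition of the Rich factor: enriched DEGs / pathway background genes
rich_factor <- k / K

# Across many pathways, q values would then come from BH adjustment, e.g.
# q_values <- p.adjust(p_values, method = "BH")
c(p_value = p_value, rich_factor = rich_factor)
```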
Correlation analysis of genes and metabolites in each group was conducted using R to calculate the Pearson correlation coefficient (PCC) . A nine-quadrant plot was used to display fold changes of genes and metabolites with PCC > 0.8 in each differential group (Fig. ). In quadrants three and seven, the DEG expression patterns matched the metabolite accumulation patterns, suggesting positive regulation by these genes. A total of 1,233 DEGs and 46 metabolites were identified in H-CK vs. H-MeJA (Fig. A), and 6,337 DEGs and 67 metabolites were identified in S-CK vs. S-MeJA (Fig. B). Furthermore, correlation analysis of DEGs and differential metabolites (PCC > 0.8) revealed that 691 DEGs and 43 metabolites were correlated in H-CK vs. H-MeJA, and 3,952 DEGs and 57 metabolites were correlated in S-CK vs. S-MeJA (Supplementary Fig. ). Orthogonal Partial Least Squares (O2PLS), an unsupervised modeling approach, objectively describes potential associative trends between two datasets, minimizing the risk of false-positive associations at the source . An O2PLS model was constructed using all DEGs and differential metabolites. Through the loading plot, variables with high correlation and weight across different datasets were preliminarily identified, enabling the screening of key variables influencing the other omics dataset (Fig. ). The top 30 metabolites influenced by the transcriptome included various compounds (Fig. A). Among the top 30 genes affected by the metabolome, 15 presented high expression levels in MeJA-treated hydroponics and substrate-cultured Chinese chives compared with H-CK and S-CK, whereas five genes presented high expression levels in MeJA-treated Chinese chives (Fig. B). Exogenous MeJA induces the JA response pathway in Chinese chives Exogenous MeJA treatment caused the differential expression of 21 genes related to JA biosynthesis (Fig. ). These genes included eight LOX genes (Cluster-21234.114338, Cluster-21234.128472, Cluster-21234.138565, Cluster-21234.36411, Cluster-21234.94233, Cluster-21234.95845, Cluster-21234.96814, Cluster-21234.96818), one AOC gene (Cluster-21234.132318), four OPR genes (Cluster-21234.105696, Cluster-21234.114924, Cluster-21234.130394, Cluster-21234.91314), one JMT gene (Cluster-21234.134351), five JAZ genes (Cluster-21234.36168, Cluster-21234.66575, Cluster-21234.67939, Cluster-21234.95190, Cluster-24488.0), and two MYC2 genes (Cluster-21234.151127, Cluster-21234.34656). Exogenous MeJA application inhibited JA biosynthesis and signaling in hydroponically grown Chinese chives, resulting in the downregulation of the LOX, AOC, OPR, JMT, JAZ, and MYC2 genes involved in the JA pathway. Analysis of the biosynthetic pathways of flavor substance precursors (CSOs) S-Alk(en)ylcysteine sulfoxides (CSOs), which are common in Allium plants, are key flavor precursors. To understand the role of MeJA treatment in flavor regulation in Chinese chives, we identified 193 CSO biosynthesis-related genes from the transcriptome data through a homology search and gene annotation (Fig. ). Figure shows the analysis of DEGs and metabolite levels in the CSO pathway in MeJA-treated soilless Chinese chives. Four DEGs associated with sulfate metabolism were identified: two SULTRs, one OASTL, and one FMO (Fig. ). MeJA treatment upregulated AtuSULTR1.1 /Cluster-21234.58521 in hydroponic Chinese chives and AtuFMO1 /Cluster-21234.125825 in soilless Chinese chives. S-alkyl-L-cysteine levels in soilless Chinese chive leaves decreased significantly after MeJA application. 
The upregulated FMO gene (Cluster-21234.125825) increased the conversion of S-alkyl-L-cysteine to CSOs, increasing CSO accumulation (Fig. , IV-V). In addition, MeJA treatment elevated the levels of S-methyl-L-cysteine and methionine in hydroponically cultivated chive leaves. Analysis of FMO-regulated flavor substance precursor (CSO) biosynthesis FMO is an essential enzyme in CSO biosynthesis, making the inferred FMO gene Cluster-21234.125825 a candidate for detailed study. Phylogenetic analysis via the neighbor‒joining method with 28 FMO protein sequences from Arabidopsis and Cluster-21234.125825 revealed that Cluster-21234.125825 is closely related to At1G19250/ AtFMO1 (Fig. A). Because AtFMO1 is a flavin-binding monooxygenase in Arabidopsis , this close relationship suggests that AtuFMO1 /Cluster-21234.125825 encodes a similar enzyme. Despite its low transcriptional abundance, AtuFMO1 expression significantly increased with exogenous MeJA in hydroponically grown Chinese chives, indicating its crucial role in enhancing the flavor intensity of soilless-cultivated Chinese chives. Eighty DEGs were identified as potential transcription factors (TFs) in the transcriptome analysis. The WRKY, AP2/ERF-ERF, MYB, and bHLH families responded to exogenous MeJA treatments in hydroponically grown Chinese chives. It has been hypothesized that AtuFMO1 is targeted by multiple TFs with similar expression patterns. Cluster analysis revealed three TF groups whose expression differed in response to MeJA treatment (Fig. B). Clusters I and II were highly expressed in hydroponically grown Chinese chives but expressed at lower levels in substrate-cultivated Chinese chives. AtuFMO1 and nine related TFs were categorized within the γ subbranch of Cluster III, indicating similar expression patterns (Fig. B). Pearson’s correlation analysis of the RNA-Seq data, S-alkyl-L-cysteine levels, and leaf pungency intensity revealed significant correlations (Fig. , C-I). Cluster-21234.135402/PHL7 was significantly negatively correlated with S-alkyl-L-cysteine accumulation ( r = 0.8797, p < 0.01) (Fig. D), as were Cluster-21234.55598/AP2/ERF-ERF ( r = 0.7042, p < 0.05) (Fig. E) and Cluster-21234.96792/PHL7 ( r = 0.7790, p < 0.01) (Fig. F). Conversely, these TFs were positively correlated with leaf pungency intensity, particularly Cluster-21234.135402/PHL7 ( r = 0.7910, p < 0.01), Cluster-21234.55598/AP2/ERF-ERF ( r = 0.5996, p < 0.05), and Cluster-21234.96792/PHL7 ( r = 0.6561, p < 0.05). These results suggest that these TFs may promote the conversion of S-alkyl-L-cysteine to S-alk(en)ylcysteine sulfoxides (CSOs), increasing pungency. Thus, Cluster-21234.135402/PHL7, Cluster-21234.55598/AP2/ERF-ERF, and Cluster-21234.96792/PHL7 may act as positive regulators of AtuFMO1 in MeJA-induced CSO biosynthesis. qRT‒PCR validation analysis Transcriptional profiles from RNA-Seq were validated by quantitative reverse transcription‒polymerase chain reaction (qRT‒PCR) in independent experiments. Eight genes were randomly selected for validation. Except for one gene (Cluster-21234.55598 (AP2/ERF-ERF)), the expression levels of the other seven genes matched the RNA-Seq data, showing 87.5% concordance (Fig. ). Therefore, the qRT‒PCR results confirmed the reliability of the RNA-Seq-generated gene expression profiles. 
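To spell out the 2−ΔΔCT calculation used for the qRT-PCR validation, here is a minimal R sketch; the Ct values are invented placeholders, and DN253_c0_g1 is assumed as the reference gene only because the Methods name it as the internal standard.

```r
# Minimal sketch of relative expression by the 2^-ddCt method, assuming
# DN253_c0_g1 is the internal reference (as stated in the Methods).
# All Ct values are invented placeholders.
ct_target_treated <- c(22.1, 22.4, 22.0)   # target gene, MeJA-treated replicates
ct_ref_treated    <- c(18.0, 18.2, 17.9)   # reference gene, MeJA-treated replicates
ct_target_control <- c(24.3, 24.0, 24.5)   # target gene, control replicates
ct_ref_control    <- c(18.1, 18.0, 18.2)   # reference gene, control replicates

d_ct_treated <- ct_target_treated - ct_ref_treated      # delta-Ct per treated replicate
d_ct_control <- ct_target_control - ct_ref_control      # delta-Ct per control replicate

dd_ct <- d_ct_treated - mean(d_ct_control)              # delta-delta-Ct vs. control mean
rel_expr <- 2^(-dd_ct)                                  # fold change relative to the control

mean(rel_expr)                                          # mean of three replicates
sd(rel_expr) / sqrt(length(rel_expr))                   # standard error
```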
Eighty DEGs were identified as potential transcription factors (TFs) in the transcriptome analysis. The WRKY, AP2/ERF-ERF, MYB, and bHLH families responded to exogenous MeJA treatments in hydroponically grown Chinese chives. It has been hypothesized that AtuFMO1 is targeted by multiple TFs with similar expression patterns. Cluster analysis revealed three TF groups whose expression differed in response to MeJA treatment (Fig. B). Clusters I and II were highly expressed in hydroponically grown Chinese chives but expressed at lower levels in substrate-cultivated Chinese chives. AtuFMO1 and nine related TFs were categorized within the γ subbranch of Cluster III, indicating similar expression patterns (Fig. B). Pearson’s correlation analysis of the RNA-Seq data, S-alkyl-L-cysteine levels, and leaf pungency intensity revealed significant correlations (Fig. , C-I). Cluster-21234.135402/PHL7 was significantly negatively correlated with S-alkyl-L-cysteine accumulation ( r = 0.8797, p < 0.01) (Fig. D), as were Cluster-21234.55598/AP2/ERF-ERF ( r = 0.7042, p < 0.05) (Fig. E) and Cluster-21234.96792/PHL7 ( r = 0.7790, p < 0.01) (Fig. F). Conversely, these TFs were positively correlated with leaf pungency intensity, particularly in Cluster-21234.135402/PHL7 ( r = 0.7910, p < 0.01), Cluster-21234.55598/AP2/ERF-ERF ( r = 0.5996, p < 0.05), and Cluster-21234.96792/PHL7 ( r = 0.6561, p < 0.05). These results suggest that these TFs may promote the conversion of S-alkyl-L-cysteine to cysteine sulfoxides (CSOs), increasing pungency. Thus, Cluster-21234.135402/PHL7, Cluster-21234.55598/AP2/ERF-ERF, and Cluster-21234.96792/PHL7 may act as positive regulators of AtuFMO1 in MeJA-induced CSO biosynthesis. Transcriptional profiles from RNA-Seq were validated by quantitative reverse transcription‒polymerase chain reaction (qRT‒PCR) in independent experiments. Eight genes were randomly selected for validation. Except for one gene (Cluster-21234.55598 (AP2/ERF-ERF)), the expression levels of the other seven genes matched the RNA-Seq data, showing 87.5% concordance (Fig. ). Therefore, the qRT‒PCR results confirmed the reliability of the RNA-Seq-generated gene expression profiles. Chinese chives are valued for their unique flavor profile and phytochemical composition , yet remain understudied compared to other Allium species. This study provides a comprehensive analysis of metabolite profiles and regulatory mechanisms in hydroponic versus substrate-cultivated Chinese chives under MeJA treatment, identifying 923 metabolites classified into 23 groups (Fig. ). Previous analyses of broadleaf and cultivated Chinese chives detected 341 metabolites through GC‒MS and LC‒MS , whereas LC‒MS of 30 garlic samples identified 472 metabolites . Distinct hierarchical clustering patterns revealed significant variations in metabolite accumulation between different cultivation methods and MeJA treatment groups. In addition, these differences in flavor precursor compositions underscore the diversity among Allium species; specific precursor ratios are characteristic of each species . The identification of S-allyl-l-cysteine and S-methyl-l-cysteine (Fig. ) further aligns with established literature showing that allicin forms a predominant portion of the flavor profile in Allium species, where methionine plays a critical role . Xia et al. (2022) reported that the methionine-to-allicin ratio in Chinese chives is approximately 6:1 .
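The qRT-PCR validation described in this section rests on relative quantification by the 2^-ΔΔCt (Livak) method followed by a direction-of-change comparison with the RNA-Seq fold changes. The gene names, Ct values and reference-gene choice in the sketch below are invented for illustration and are not the study's data.

```python
# Sketch of 2^-ddCt relative expression (Livak method) and concordance with RNA-Seq,
# using made-up Ct values and an assumed reference gene.
import numpy as np

# Hypothetical mean Ct values: (target Ct, reference Ct) for control and treated samples.
ct = {
    "GeneA": {"control": (24.1, 18.0), "treated": (22.3, 18.1)},
    "GeneB": {"control": (27.5, 18.2), "treated": (28.9, 18.0)},
}
rnaseq_log2fc = {"GeneA": 1.6, "GeneB": -1.2}   # assumed RNA-Seq log2 fold changes

concordant = 0
for gene, vals in ct.items():
    d_control = vals["control"][0] - vals["control"][1]   # dCt = Ct(target) - Ct(reference)
    d_treated = vals["treated"][0] - vals["treated"][1]
    ddct = d_treated - d_control
    fold_change = 2.0 ** (-ddct)                          # relative expression, treated vs control
    same_direction = (np.log2(fold_change) > 0) == (rnaseq_log2fc[gene] > 0)
    concordant += same_direction
    print(f"{gene}: 2^-ddCt = {fold_change:.2f}, concordant with RNA-Seq: {same_direction}")

print(f"Concordance: {concordant}/{len(ct)} genes")       # the study reports 7/8 genes (87.5%)
```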
Notably, this study detected methionine and cyclic allicin but failed to detect allicin or isoallicin, despite their reported presence in Chinese chives . This observation highlights the necessity of employing cultivation method-specific analytical protocols, along with advanced high-resolution mass spectrometry and precise analytical tools, for flavor compound analysis. The KEGG enrichment analysis indicates that the production of bioactive compounds in hydroponically cultivated Chinese chives engages multiple metabolic pathways, including phenylpropanoid biosynthesis, flavonoid biosynthesis and linoleic acid metabolism (Fig. C). In contrast, substrate-cultivated Chinese chives utilize pathways centered on purine, flavonoid and arginine biosynthesis, and inositol phosphate metabolism (Fig. D). Glutathione, the main organic sulfur compound used for transport and storage in plants, is the starting point of cysteine sulfoxide biosynthesis, which begins with the S-alkylation of glutathione. Importantly, glutathione metabolism, critical for sulfur compound biosynthesis, was enriched in both cultivation methods, suggesting that modifications in metabolic pathways due to MeJA treatments may enhance flavor formation through alterations in glutathione dynamics. Allium plants are characterized by their richness in sulfur compounds, which contribute to both sensory properties and bioactivities, including antioxidant and antibacterial effects . Transcriptomic analyses provided insight into the molecular mechanisms governing flavor development, identifying 193 candidate genes associated with cysteine sulfoxide (CSO) biosynthesis (Fig. ). Hydroponic systems preferentially expressed Cluster-21234.127212 ( AtuSULTR1.2 ), whereas substrate-grown Chinese chives upregulated Cluster-21234.94215 ( AtuSULTR1.2 ) (Fig. ). This isoform switching suggests adaptive regulation of sulfate uptake mechanisms, potentially optimizing sulfur acquisition under distinct root zone environments. Notably, the significantly higher SULTR expression observed in hydroponic Chinese chives indicates that MeJA application enhances sulfate transport capacity under controlled nutrient conditions, playing a crucial role in ensuring efficient production of flavor compounds. Downstream of sulfate uptake, the study revealed notable variations in key enzymatic steps within the sulfur assimilation pathway. During sulfate assimilation, transcripts such as Cluster-21234.119553 and Cluster-281.0 appear to regulate adenosine-5’-phosphosulfate production consistently across cultivation methods. However, in the cysteine biosynthesis pathway, distinct expression patterns of adenosine-5’-phosphosulfate reductase (APR) isoforms (e.g., Cluster-21234.92656 and Cluster-21234.94248, both annotated as AtuAPR3 ) underscore the influence of cultivation mode on metabolic fluxes. The impact of MeJA treatment is further evident in the regulation of genes involved in converting sulfite to organic sulfur compounds and, ultimately, in glutathione biosynthesis. The significant upregulation of AtuSiR1 (Clusters 21234.100489 and 21234.100490) implies enhanced conversion efficiency of sulfite, while the elevated expression of AtuGCL1 (Cluster-21234.110420) in soilless Chinese chives accentuates glutathione synthesis following elicitor application. Similarly, the GGT gene cluster (Cluster-21234.36292) and additional GGT genes (Clusters 21234.75176 and 21234.103679) showed marked induction in soilless Chinese chives, reinforcing their roles in CSO biosynthesis under MeJA treatment.
These findings suggest that MeJA orchestrates sulfur flux toward flavor compound production. Furthermore, the hydrolysis of CSOs—catalyzed by alliinase enzymes that convert these non-volatile precursors into the volatile sulfur compounds responsible for Allium aroma—was significantly influenced by MeJA. Out of 58 alliinase genes identified, 13 were highly expressed in soilless Chinese chives post-treatment. This indicates that MeJA not only enhances sulfur metabolism but also modulates the final steps of flavor compound biosynthesis in soilless systems. Moreover, 12-oxo-phytodienoic acid reductase (OPR) activity affects JA biosynthesis by modulating OPDA . MeJA has also been shown to influence secondary metabolic pathways related to JA anabolism and signaling . MeJA treatment significantly influenced JA-mediated signaling pathways, as evidenced by the enrichment of DEGs in “JA-mediated signaling pathway” (GO:0009867) and “cellular response to JA stimulus” (GO:0071395). This aligns with previous findings that exogenous MeJA reduces JA synthase activity (e.g., LOX, AOC, and OPR) and JA accumulation in Chinese chives . Additionally, MeJA modulated secondary metabolic pathways, including flavonoid metabolism, sulfur ester metabolism, and lipid oxidation, as revealed by GO and KEGG enrichment analyses. These pathways are critical for metabolite accumulation and flavor compound biosynthesis, highlighting MeJA’s role as a regulator of primary and secondary metabolism. The observed inhibition of LOX, AOC, and OPR genes (Fig. ) suggests a feedback suppression mechanism where exogenous MeJA downregulates endogenous JA biosynthesis. This paradoxical regulation contrasts with typical JA signaling patterns in model plants, possibly reflecting evolutionary adaptation in Allium species to prioritize secondary metabolite production over defense signaling when exposed to external elicitors that warrants further investigation. Crucially, MeJA simultaneously activated sulfur assimilation genes (SULTRs, OASTL) and AtuFMO1 (Fig. ), creating a metabolic channeling effect that directs sulfur flux toward CSO synthesis rather than general protein synthesis. Moreover, exogenous MeJA promoted growth, nutritional quality, antioxidant capacity and dry matter accumulation in Chinese chives (Table ), demonstrating its potential application in improving agricultural practices. The unique flavor of onion vegetables stems from the conversion of tasteless CSO precursors into volatile sulfur compounds by alliinase upon tissue disruption . Although previous studies have characterized the CSO metabolic pathway in several Allium species , the impact of external hormonal stimuli on CSO biosynthesis and the underlying regulatory mechanisms remain largely unexplored. This study provides critical insights into the molecular regulation of CSO biosynthesis in Chinese chives under exogenous MeJA treatment. Our results demonstrate that AtuFMO1 , an enzyme analogous to AsFMO1 in garlic , plays a pivotal role in mediating MeJA-induced CSO biosynthesis. Specifically, enhanced expression of AtuFMO1 , in response to MeJA (Fig. ), appears to drive the rate-limiting sulfur monooxygenation step required for S-allyl cysteine sulfoxide (ACSO) production. This finding reinforces the central role of FMO enzymes in linking hormonal signaling to flavor compound formation. 
Furthermore, our integrated transcriptomic and metabolomic analyses identified a significant coexpression of several transcription factor families—namely MYB, bHLH, NAC, and AP2/ERF—with AtuFMO1 (Fig. B). In addition to AtuFMO1 , MYB subfamily members, including MYB28, MYB34, and MYB51, play crucial roles in sulfur metabolism and abiotic stress induction . bHLH TFs are essential for thioglucoside biosynthesis through the MYB-bHLH complex . AP2/ERF transcription factors have been identified as potential regulators of MYB-bHLH complexes . Notably, TFs such as Cluster-21234.135402 (PHL7), Cluster-21234.55598 (AP2/ERF-ERF), and Cluster-21234.96792 (PHL7) exhibited strong positive correlations with metabolites related to CSO biosynthesis. These regulatory factors may modulate sulfur metabolism by promoting the transcription of AtuFMO1 , thereby enhancing the conversion of S-alkyl-L-cysteine into CSOs and intensifying the characteristic pungency of Chinese chives. Critically, while these correlations provide a compelling model for the transcriptional regulation of CSO biosynthesis under MeJA influence, the study has limitations. The functional roles of these candidate TFs, including AtuPHL7 and AtuAP2/ERF-ERF, remain to be verified at the protein level. Future investigations employing proteomic techniques, such as Western blotting or enzyme activity assays, are required to validate these interactions and elucidate the molecular mechanisms by which TFs regulate AtuFMO -mediated metabolism of garlic flavor compounds. Exogenous methyl jasmonate (MeJA) enhances the pungent flavor of soilless-cultivated Chinese chives by activating key enzyme-encoding genes ( AtuFMO1 ) involved in the biosynthesis of CSOs, thereby increasing the accumulation of flavor precursors such as methiin and alliin. Our integrated metabolomic and transcriptomic analyses revealed that MeJA suppresses endogenous JA biosynthesis while promoting sulfur assimilation and CSO synthesis pathways. Notably, AtuFMO1 , regulated by transcription factors AtuPHL7 and AP2/ERF-ERF, plays a pivotal role in catalyzing the sulfoxidation of S-alkyl-L-cysteine, a rate-limiting step in CSO production. These findings establish a framework for understanding how transcription factors (TFs) regulate AtuFMO -mediated metabolism of garlic flavor compounds, enhancing the pungent flavor of Chinese chives. This study highlights the potential of combining transcriptomic insights with agronomic practices to optimize flavor traits in Allium crops. Future research should focus on exploring the synergistic interactions between MeJA and other elicitors. Breeding strategies should focus on selecting Chinese chive varieties with high AtuFMO1 expression and its associated TFs (e.g., AtuPHL7), utilizing marker-assisted selection or gene-editing tools to develop cultivars with inherently enhanced CSO biosynthetic capacity. Below is the link to the electronic supplementary material. Supplementary Material 1 Supplementary Material 2
Cardiomyocyte electrophysiology and its modulation: current views and future prospects
62de78f8-055e-4f9b-bd19-d8d89cb201a2
10150219
Physiology[mh]
. Classical experiments: Silvio Weidmann (1921–2005) The heart is the most important and prominent biological oscillator and is critical to most multicellular animal life. Its functional disruption causes death or disease. Understanding both normal and abnormal cardiomyocyte physiology is thus of fundamental scientific and clinical importance. It involves mechanisms operating at multiple cellular levels, ranging from the cell membranes and their molecular and cellular signalling machinery, through function in entire atrial and ventricular chambers and their conducting and pacing tissue, to systemic modulation by central and peripheral nervous and endocrine mechanisms. Much of this area and its application date from Silvio Weidmann's (1921–2005) pioneering experiments. This article and this Phil. Trans. R. Soc. issue it introduces, prefaced by DiFrancesco & Noble , falls close to and celebrates Weidmann's 100th birthday. Weidmann was first to record accurate cardiomyocyte action potentials (APs), the functional basis of cardiac electrophysiological activation, in the 1950s, employing recently invented Ling–Gerard glass microelectrodes . He demonstrated and clarified the contributions of Na + and K + currents, I Na and I K , to the initiation and conduction of excitation and its subsequent repolarization and recovery from refractoriness. Ventricular, atrial and Purkinje cardiomyocyte APs showed relatively rapid (less than 1 ms) upstrokes whose amplitude, in contrast to background resting potentials, depended upon extracellular [NaCl]. This implicated a selective transient Na + permeability reflecting a local anaesthetic-sensitive, inward voltage-dependent I Na paralleling findings in nerve. The subsequent, more gradual, AP recoveries to the resting potential varied in timescale and waveform between atria, and ventricles and Purkinje fibres with their prolonged plateau phases . Membrane impedance determinations identified the recoveries with inward rectifying rapid outward K + current, I Kr . Following recovery, Purkinje fibres additionally showed depolarizing pacemaker currents , potentially leading to re-excitation and repetitive activity. Weidmann's work then anticipated connexin gap junction-mediated AP propagation and relationships between membrane voltage, extracellular Ca 2+ and contraction . . Cardiac arrhythmias: a major public health problem These early observations were key to the development of the cardiac electrophysiological field and the continuing productive and constructive dialogue between its fundamental science and clinical applications bearing on normal and abnormal cardiac activity. The latter results in the major public health problem of cardiac arrhythmias, a leading cause of clinical mortality and morbidity, second in incidence only to all cancers combined. Sinus node disorders (SND) form the major indication for pacemaker implantation worldwide. Atrial fibrillation (AF) affects 1 : 10 adults aged >60 years , increasing stroke incidences and all-cause mortality . Ventricular arrhythmias precipitating sudden cardiac death (SCD) are a major cause of mortality in cardiac failure, and associated metabolic, including common diabetic and ischaemic, conditions . The early cardiac electrophysiological studies led to the classical Singh–Vaughan Williams classification scheme simultaneously classifying physiological targets governing cardiac rhythm and the then known cardiotropic drugs ( a (i)) . It provided widely useful clinical guidelines . 
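Weidmann's observation, noted above, that the AP overshoot (but not the resting potential) tracks extracellular [NaCl] follows directly from the Nernst relation for Na+. The worked form below uses typical textbook cardiomyocyte concentrations, which are assumptions for illustration rather than values from the studies cited here.

```latex
% Nernst potential for Na+ at 37 degrees C, with assumed concentrations [Na+]_o = 140 mM, [Na+]_i = 10 mM
E_{\mathrm{Na}} \;=\; \frac{RT}{zF}\,\ln\!\frac{[\mathrm{Na}^+]_o}{[\mathrm{Na}^+]_i}
\;\approx\; 26.7\ \mathrm{mV}\times\ln\!\frac{140}{10} \;\approx\; +70\ \mathrm{mV}.
```

On these assumed values, halving [Na+]o lowers E_Na by roughly 26.7 ln 2 (about 18 mV), consistent with the reduced overshoot in low-NaCl solutions, whereas the resting potential, set largely by K+, is comparatively insensitive to this manipulation.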
Here, Class I drugs targeted I Na , reducing AP phase 0 slopes and overshoots, paralleling Weidmann's findings , and varyingly affecting AP duration (APD) and effective refractory period (ERP). Class II β-adrenergic inhibitors slowed sino-atrial node (SAN) pacing and atrioventricular node (AVN) conduction . Class III voltage-gated K + channel blockers delayed AP phase 3 repolarization, lengthening ERPs. Class IV L-type Ca 2+ channel inhibitors reduced cardiac, particularly SAN and AVN, rate and conduction . . Modern developments in the field Subsequent cardiac electrophysiological studies greatly advanced our understanding of events underlying pacing, electrical activity and its propagation through specialized conducting tissue into successive atrial, ventricular and conducting regions at the molecular and cellular as well as the systems levels. These studies demonstrated and characterized extensive numbers of novel ion channel, ion transport and receptor protein molecules . Many such insights, particularly their translation to roles in normal and arrhythmic activity at the systems level, suggesting novel pharmacological and therapeutic applications, came from monogenically modified murine platforms . Murine and human hearts share dual right- and left-sided circulations, distinct structurally homologous atria and ventricles, and pacing or conducting SAN, AVN and atrioventricular (AV) bundles. They did show differences in size, heart rate, L-type Ca 2+ current I CaL and transient outward K + current contributions ( I to ) and consequent APD. Nevertheless, major features of AP depolarization and conduction, transmural conduction velocities, relationships between APDs and ERPs and differences in transmural APD heterogeneities remain conserved . Finally, single cardiomyocyte isolations from these preparations permitted cellular-level experimental studies. In the current theme issue, Salvage et al . , Remme , Terrar , Jung et al . and He et al . review subsequent findings emerging from such genetic platforms; Anderson et al . implicate circadian variations in sympathetic actions on pacemaker ion channel gene transcription in diurnal cardiac rate variations in wild-type (WT) murine hearts. Complementary, theoretical, reconstructions then predict the physiological end-effects of the changes observed (Alrabghi et al . ; Hancox et al. ). More recently, genetically modified induced pluripotent stem cell (iPSC) platforms have shown promise, likely as cellular rather than systems models, lacking the anatomically related in vivo conducting (Purkinje cell) and contractile (cardiomyocyte) tissue organization involved in initiating and maintaining cardiac arrhythmias. Many available human pluripotent stem cell-derived cardiomyocyte (hiPSC-CM) monolayers show immature embryonic-like as opposed to human adult atrial/ventricular myocardial functional and structural phenotypes, limiting their translational utility . They showed low resting membrane potentials , low/absent I K1 , low membrane capacitances , immature AP profiles and slow electric impulse propagation velocities , and their generation primarily focused on ventricular rather than atrial phenotypes. However, Ahmad et al . describe hiPSC-CMs with AP properties and acetylcholine (ACh)-activated I K expression characteristic of atrial cells. 
iPSCs have also been explored as possible models for normal and disease-related changes in ion channel expression, Ca 2+ homeostatic phenotypes, neurocardiac interactions and cardiac hypertrophic change (see: Chen et al . ; Zhou et al . ; Li et al . and Langa et al . , respectively). Finally, direct human clinical electrophysiological studies continue to generate important scientific and translational insights into cardiac arrhythmic phenomena. Thus, recent electrocardiographic and electrical mapping studies distinguished potential roles of focal, Purkinje system activity from rotor activity in initiating and maintaining electrophysiologically and pharmacologically distinct polymorphic ventricular tachycardic (VT) or fibrillatory subtypes. These findings have potential implications for the clinical management of post-myocardial infarction sudden cardiac arrest. This theme issue discusses novel targets and their actions on excitable activity at multiple levels of cardiac functional organization established in this subsequent work as outlined in this introductory review, using standard texts as starting point ( b ). Thus normal and arrhythmic activity ( b (i)) immediately arises from ( b (ii)) surface membrane ion channels and their interactions underlying automaticity and pacemaking, and AP excitation, propagation and recovery (§§4 and 5 below). These membrane-level events initiate and are modulated by (iv) cellular-level feed-forward and feedback effects of excitation–contraction coupling and its Ca 2+ -mediated triggering (§6). Both these are modulated by (iii) G-protein-mediated autonomic inputs and the central nervous system circadian rhythms that these may transmit (§7). Of increasing interest are the longer-term regulatory mechanisms related to (v) metabolic feedback (§8) and other upstream target modulators (§9) causing potentially pathological electrophysiological and structural remodelling. All these regulatory events ultimately bear on surface membrane ion channel function in (ii), through which the arrhythmic outcomes emerge. These article sections are keyed to the individual articles in this Phil. Trans. theme issue. . Ion channels contributing to cardiomyocyte surface membrane excitation Normal cardiac rhythm requires a normal, regular, SAN automaticity. Inward, hyperpolarization-induced cyclic-nucleotide-activated channel (HCN)-mediated I f and other ionic currents combine with electrogenic Na + /Ca 2+ exchange (NCX) contributions driven by store Ca 2+ release (§6). Together these drive a time-dependent membrane potential depolarization from background resting levels to the Ca 2+ channel threshold. The resulting excitation initiates Na + current and consequent AP excitation at the outer rim of the SAN . Donald & Lakatta review recent discoveries bearing on the coupled-clock system from the cellular level, within the context of a complex cellular SAN organization. This pacing is modulated by adrenergic or cholinergic SAN pacemaker stimulation or inhibition (§7 below). Altered SAN automaticity causing abnormal or altered AP generation can arise from SAN malfunction, SND, or altered background diastolic or resting potentials. Abnormal automaticity can also arise with abnormal AVN or Purkinje tissue pacemaker activity when spontaneous impulses are generated in pathologically partially depolarized fibres, and can even involve normally non-automatic atrial and ventricular muscle. These latter circumstances can cause an automatic, often tachycardic, firing distinct from SAN activity. 
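How the slope of diastolic depolarization sets pacemaker rate, as outlined above, can be caricatured with a linear ramp from the maximum diastolic potential to threshold. All voltages, slopes and durations in the sketch below are assumed illustrative values, not measurements from the cited studies.

```python
# Toy estimate of pacemaker rate from the slope of diastolic depolarization:
# cycle length ~ AP duration + time for the membrane to ramp from the maximum
# diastolic potential (MDP) to the excitation threshold. All values assumed.
def rate_bpm(mdp_mV=-60.0, threshold_mV=-40.0,
             diastolic_slope_mV_per_s=50.0, apd_s=0.15) -> float:
    ramp_time_s = (threshold_mV - mdp_mV) / diastolic_slope_mV_per_s
    cycle_length_s = apd_s + ramp_time_s
    return 60.0 / cycle_length_s

print(f"baseline: {rate_bpm():.0f} bpm")
# Sympathetic stimulation (more I_f, more cAMP) steepens the diastolic slope:
print(f"steeper diastolic slope: {rate_bpm(diastolic_slope_mV_per_s=90.0):.0f} bpm")
# Vagal stimulation (I_KACh, less cAMP) flattens the slope and hyperpolarizes the MDP:
print(f"flatter slope, hyperpolarized MDP: "
      f"{rate_bpm(mdp_mV=-65.0, diastolic_slope_mV_per_s=30.0):.0f} bpm")
```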
The ensuing APs form the functional unit of cardiomyocyte excitable activity. These are driven by a sequence of inward ( a ) and outward ( b ) currents mediating successive rapid depolarizing (phase 0), early repolarizing (phase 1), brief atrial ( c ) and prolonged ventricular ( d ) plateau (phase 2), late repolarization (phase 3) and electrically diastolic phases (phase 4). Inward I Na activation initiates the propagated AP phase as well as the remaining sequence of electrical events. Genetic evidence for loss or gain of I Na function correlates with pro-arrhythmic human Brugada (BrS) and long-QT3 syndromes (LQTS3), respectively. Recent findings reviewed here further report feedback actions on I Na activation (Salvage et al. ) and potentially pro-arrhythmic late I NaL currents (Liu et al . ) by further, downstream, excitation–contraction coupling (§5) and metabolic events (§7). All these effects were recapitulated in loss or gain of function genetic murine models affecting Nav1.5 and RyR2 function , and metabolic activation . Furthermore, electrophysiological aberrations and arrhythmic tendency in the BrS and LQTS3 models were similarly accentuated or relieved by flecainide and ameliorated or accentuated by quinidine , findings with potential translational significance . Remme reviews complex Nav1.5 functional and distribution patterns involving particular subcellular cardiomyocyte subdomains, as well as non-canonical non-electrogenic Nav1.5 actions with structural, potentially cardiomyopathic and pro-arrhythmic, effects. Finally, Nav1.5 does occur in other cell types, including various extracardiac tissues. Conversely, cardiomyocytes may express other than Nav1.5 subtypes. AP conduction involves local circuit currents through connexin channels connecting adjacent cardiomyocytes. Their magnitudes are determined by maximum rates of AP depolarization (d V /d t ) max , themselves dependent upon membrane capacitance and cytosolic resistance . The resulting AP propagation produces a coherent wave of excitation followed by refractoriness, of wavelength λ . This propagates through gap junction connexin and possible ephaptic connections between successive SAN, atrial, AV, Purkinje and endocardial and epicardial ventricular cardiomyocytes . The wavelength λ is normally sufficiently long to prevent re-excitation of recovered tissue behind the wave. Abnormal conduction slowing, shortening λ , can follow functional reductions in I Na or anatomical changes altering tissue electrical resistance or the functional or anatomical conducting pathway (§7; ). These can also produce heterogeneities in refractoriness and conduction in the conducting circuit. These heterogeneities can vary with time and previous impulse activation, and produce either total and unidirectional conduction block. Finally, at the temporal rather than spatial level, ERPs extend beyond each AP. They can increase with Na + channel inhibition, delaying the point at which a critical proportion of Na + channels have recovered, or with AP prolongation . These changes potentially cause re-entrant substrate perpetuating triggering events into sustained arrhythmias . These can involve spatial conduction heterogeneities, exemplified by transmural gradients across the ventricular wall, or temporal heterogeneities with abnormal AP recovery reflecting altered relative timings between AP recovery, refractoriness and repolarization reserve . Thus, discrepancies between ERP and AP recovery times occur in LQTS. 
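The protective role of the excitation wavelength described above can be made concrete with the standard relation λ = conduction velocity × effective refractory period. The velocities, refractory periods and circuit length below are assumed illustrative values only, chosen to show why slowed conduction or shortened refractoriness favours re-entry.

```python
# Illustrative wavelength calculation: lambda = conduction velocity x effective refractory period.
# Re-entry becomes geometrically possible when lambda falls below the circuit path length.
def wavelength_mm(cv_mm_per_ms: float, erp_ms: float) -> float:
    return cv_mm_per_ms * erp_ms

path_length_mm = 60.0                                    # assumed length of a potential re-entrant circuit

normal = wavelength_mm(cv_mm_per_ms=0.5, erp_ms=200)     # assumed normal CV and ERP  -> 100 mm
slowed = wavelength_mm(cv_mm_per_ms=0.2, erp_ms=150)     # reduced I_Na and shortened ERP -> 30 mm

for label, lam in [("normal", normal), ("slowed conduction, short ERP", slowed)]:
    at_risk = lam < path_length_mm
    print(f"{label}: lambda = {lam:.0f} mm, re-entry possible: {at_risk}")
```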
Arrhythmias arising from isolated, decay of or block of impulse conduction can also occur in the absence of re-entrant pathways. Thus, a sino-atrial (SA) conduction block permits escape of a supraventricular or ventricular focus which generates abnormal impulses. Similar phenomena can follow delayed or blocked AV conduction . Different ion channels offer complementary contributions to AP characteristics with differing effects on heart rhythm reflected in turn in different modes of action of particular anti-arrhythmic drugs . Drugs acting on I Na alter the AP depolarization phase 0. Of these, Class Ia drugs bind to the Nav1.5 open state with τ ≈ 1–10 s dissociation time constants, inhibiting AV conduction and increasing ERPs, additionally increasing APD by a concomitant I K block. Class Ib agents bind preferentially to the Nav1.5 inactivated state, from which their more rapid τ ≈ 0.1–1.0 s dissociation minimizes their actions through successive cardiac cycles. Class Ic drugs bind to inactivated channels with a slow τ > 10 s dissociation giving a use-dependent channel block, slowing AV conduction, but little affecting APD. A new Class Id blocks pro-arrhythmic late Na + current ( I NaL ) in LQTS3, and pathological bradycardic and ischaemic conditions, and cardiac failure. Class Id drugs shorten APD and increase refractoriness and repolarization reserve . . Ion channels contributing to cardiomyocyte surface membrane recovery AP depolarization activates further channels both initiating contraction and restoring the resting membrane potential. The consequent AP waveforms vary with cell type: atrial cells show shorter APs than ventricular cells ( c,d ) . Ca 2+ channel (Cav1.2) activation, localized within the transverse tubules , detailed in the next section, contributes to the phase 2 plateau. In certain cardiomyocyte such as SAN and AVN types (see §4), this instead of Nav1.5 initiates excitable activity. Ca 2+ channel abnormalities can also cause arrhythmic phenotypes . Zeng et al . associate variants of pro-arrhythmic J wave syndromes, also found with loss of Nav1.5 function, with loss of Ca 2+ channel function, CACNB2b-S143F and CACNA1C-G37R , mutations. AP repolarization ultimately restoring the resting potential is driven by a range of outward K + currents ( b ) , for which a wide range of new K + channel subtypes have been described . Of these, transient outward Kv4.3 and Kv4.2-mediated I to currents drive the early phase 1 AP repolarization terminating phase 0 depolarization. The prominent I to , together with atrial-specific Kv1.5 ( KCNA5 )-mediated ultra-rapid I Kur , and the GIRK1- and GIRK4-mediated ACh-sensitive I KACh , result in the shorter atrial than ventricular APD. Gain of function Kv4.3 and Kv4.2 mutations have been implicated in AF. Alrabghi et al . model human atrial cells in computational reconstructions of atrial tissue and intact atria, to replicate reductions in APD, plateau, ERP and consequent λ , enhancing AP re-entry and facilitating AF. In ventricular myocytes, Kv11.1 (HERG or KCNH2) -mediated I Kr rapidly activates with phase 0 AP depolarization. It then rapidly inactivates over AP phases 0–2 . Phase 3 repolarization then re-activates I Kr , permitting outward phase 3 and early phase 4 currents terminating the plateau. By contrast, Kv7.1 ( KCNQ1 )-mediated I Ks activates more slowly over phase 2, becoming a major persistent phase 3 K + conductance. Kir2.1, Kir2.2 and Kir2.3 ( KCNJ2 , KCNJ12 and KCNJ4) mediate inwardly rectifying I K1 . 
This produces a reduced K + conductance at voltages greater than −20 mV in phases 0–2 while producing outward currents with repolarization to less than −40 mV late in phase 3. It also stabilizes phase 4 diastolic resting potentials. Cardiomyocyte resting potentials are further stabilized by background K 2P 2.1 ( KCNK2 , expressing K 2P currents), and the normally small adenosine triphosphate (ATP)-sensitive Kir6.2 ( KCNJ11) mediating I KATP . However, the latter can be activated by reduced intracellular ATP levels . Finally, Li et al . review effects of further, small-conductance Ca 2+ -activated K + (SK) channels on excitability in both normal and pathological conditions. Loss -of-K + channel function abnormalities are associated with pro-arrhythmic long -QT syndromes (LQTS). Computational analysis (Hancox et al . ) conversely implicates gain of K + channel function involving I Kr , I Ks and I K1 in short -QT syndrome (SQTS). The latter also predispose to atrial and ventricular arrhythmias and SCD. Protein expressional and functional changes related to I Ks have been closely associated with ventricular arrhythmias. Chen et al . reveal a novel role of the ubiquitin-like-modifier leukocyte antigen F-associated transcript 10 (FAT10) in regulating K + channels competing for Kv7.1 ubiquitination. This protects against pro-arrhythmic hypoxia-induced decreases in I Ks . FAT10 itself protects against myocardial ischaemia. Recent pharmacological targeting of a significant number of these novel K + currents includes new non-selective K + channel inhibitors and drugs directed towards the atrial-specific I Kur , I Kr and I KATP . . Ca 2+ homeostasis and excitation–contraction coupling summarizes the significant progress suggesting reciprocal relationships between membrane excitation and excitation–contraction coupling mechanisms ( a–d ). Transverse tubular L-type Ca 2+ current I CaL triggering producing the AP phase 2 plateau ( a,b ) results in extracellular Ca 2+ entry, causing a local cytosolic [Ca 2+ ] elevation in possible Ca 2+ microdomains formed by membranes bounding the transverse tubule–sarcoplasmic reticular, T-SR, junctions . This drives feed-forward ryanodine receptor (RyR2)-mediated sarcoplasmic reticular (SR) Ca 2+ release ( d ). RyRs are additionally regulated by intracellular factors exemplified by the FK506 binding proteins, FKBP12 and FKBP12.6, though their detailed action is debated. Richardson et al . report time- and concentration-dependent effects of FKBP12 on previously FKBP12/12.6-depleted RyR2 channels, suggesting negative co-operativity in their FKBP12 binding, potentially significant in regulating RyR-mediated Ca 2+ signalling. Genetic gain of RyR2 or loss of calsequestrin function is associated with the pro-arrhythmic condition catecholaminergic polymorphic ventricular tachycardia (CPVT) experimentally recapitulated in murine hearts carrying genetically altered RyR2 or calsequestrin-2 . The resulting further bulk cytosolic [Ca 2+ ] elevation ( e ) activates troponin, initiating mechanical activity. Ca 2+ release normally terminates with membrane repolarization. Cytosolic [Ca 2+ ] then returns to its resting level through cardiac SR membrane Ca 2+ -ATPase (SERCA2)-mediated Ca 2+ re-uptake and sequestration by SR calsequestrin, and surface membrane NCX-mediated cytosolic Ca 2+ extrusion into the extracellular space in exchange for extracellular Na + , whose electrogenicity has been implicated in both abnormal rhythm and normal SAN pacing (see §4; Donald & Lakatta ) . 
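The electrogenicity of NCX invoked above (three Na+ enter for each Ca2+ extruded, leaving one net positive charge inside) can be put into rough numbers with ΔV = Q/C. The cell volume, membrane capacitance and the size of the aberrant diastolic Ca2+ rise in the sketch below are assumed order-of-magnitude values, ignoring cytosolic Ca2+ buffering and competing SERCA re-uptake.

```python
# Order-of-magnitude estimate of the depolarization produced when NCX extrudes
# Ca2+ from an aberrant diastolic SR release (3 Na+ in : 1 Ca2+ out, +1 net charge in).
FARADAY = 96485.0                  # coulombs per mole of elementary charge

cell_volume_L = 30e-12             # assumed cytosolic volume, ~30 pL
capacitance_F = 100e-12            # assumed membrane capacitance, ~100 pF
delta_ca_M = 0.5e-6                # assumed diastolic Ca2+ rise of 0.5 uM handled by NCX

ca_moles = delta_ca_M * cell_volume_L          # mol of Ca2+ extruded
charge_C = ca_moles * 1 * FARADAY              # +1 net elementary charge per Ca2+ via NCX
delta_v_mV = (charge_C / capacitance_F) * 1e3

print(f"Estimated depolarization ~ {delta_v_mV:.1f} mV")
# Even a sub-micromolar free Ca2+ rise handled electrogenically yields a depolarization
# of the order of 10 mV, i.e. a plausible delayed after-depolarization that could reach
# the Na+ channel threshold and trigger an extrasystolic AP.
```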
The cycles of increase followed by restoration of cytosolic Ca 2+ concentration and therefore of contraction are normally synchronized with membrane events associated with the AP. Alterations in these excitation–contraction coupling processes potentially exert pro-arrhythmic effects . Of feed-back effects on their initiating membrane events ( a ), membrane potential after-depolarization events could elicit triggered activity should their amplitude be sufficient to initiate regenerative Na + or Ca 2+ channel excitation ( b ). First, altered I CaL could predispose to pro-arrhythmic early after-depolarization (EAD) phenomena late in phase 2 or early in phase 3 of the AP, in turn causing extrasystolic membrane excitation. These events typically occur under bradycardic conditions, when altered balances of inward I Na or I Ca and outward I K prolong the AP. This permits I CaL reactivation, which in turn triggers an extrasystolic AP, potentially precipitating torsades de pointes . This is particularly likely under acquired or genetic conditions of increased APD exemplified by experimental hypokalaemia or LQTS . Secondly, elevated diastolic cytosolic [Ca 2+ ] following abnormally increased I CaL or RyR2 Ca 2+ sensitivity can itself trigger propagating waves of spontaneous SR Ca 2+ release asynchronous to the normal membrane excitation cycles, further elevating cytosolic [Ca 2+ ] ( c ). These can result in delayed after-depolarization (DAD) events that follow full AP repolarization. These are driven by transient inward currents, I ti , resulting from an electrogenic NCX activity enhanced by the elevated cytosolic [Ca 2+ ] produced by the abnormal diastolic SR Ca 2+ release . NCX itself may contribute to SAN automaticity through its depolarizing electrogenic effects (see §4; ). Thirdly, Terrar reviews contributions from further intracellular organelles, including lysosomes and mitochondria, to timing and Ca 2+ store-based modulation involving further, cADP-ribose, nicotinic acid adenine dinucleotide phosphate (NAADP) and inositol tris-phosphate (IP 3 )-mediated, signalling to intracellular organelles. These further modulations of Ca 2+ homeostasis may contribute additional arrhythmic mechanisms, often similarly acting through NCX. Fourthly, elevated cytosolic [Ca 2+ ] may also downregulate Na + channel expression and function, compromising AP initiation and/or conduction velocity ( d ). Salvage et al . review this action, likely involving Ca 2+ /calmodulin (Ca 2+ -CaM) and apo-CaM interactions with binding sites on the III–IV linker and the C-terminal domain of Nav1.5 . Such mechanisms appear to operate through a wide range of physiological situations. They could also modify the expression of other ion channels, exemplified by Li et al . in the calmodulin kinase II (CaMKII)-mediated modifications in Ca 2+ -activated K + (SK2) channel expression under conditions of cardiac hypertrophy , in addition to CaMKII actions in increasing I NaL (Liu et al . ) ( e ). Finally, Zhou et al . report a further possible level of RyR2–Na + channel interaction in iPSCs carrying clinically pro-arrhythmic RYR2-A1855D . Their resulting phenotype, with premature spontaneous SR Ca 2+ transients, Ca 2+ oscillations and increased APDs, was accentuated by a co-existent SCN10A-Q1362H variant by itself not conferring any specific phenotype. These advances broadened the potential therapeutic anti-arrhythmic options. Ca 2+ channel blockers can act as non-selective surface membrane Ca 2+ channel inhibitors. 
There are also phenylalkylamine and benzothiazepine Cav1.2 and Cav1.3 channel-mediated I CaL inhibitors. One RyR2 blocker, flecainide, has found recent use in the monotherapy of CPVT . Future explorations could target (a) further surface membrane L- and/or T-type Ca 2+ channels, (b) intracellular RyR-Ca 2+ channels, (c) SERCA2 activity, (d) ion exchange, particularly Na + –Ca 2+ exchange processes, and (e) phosphorylation levels of cytosolic Ca 2+ -handling proteins, including CaMKII inhibitors, and p21 activated kinase 1 (Pak1) modulators (see §§7 and 9). . Autonomic G-protein-mediated modulation The physiological processes of cardiac pacing, ion current activation in AP generation, and the excitation–contraction coupling that initiates myofilament activity are modulated by the cardiac autonomic, sympathetic and parasympathetic innervation ( f,g ). This releases transmitters and co-transmitters binding to receptors often coupled with guanine nucleotide-binding (G-) proteins. The latter G-protein-coupled receptors (GPCRs) activate regulatory biochemical cascades with complex and multiple inotropic, chronotropic and lusitropic effects upon cardiac function . hiPSC-derived co-culture systems permitting closer examination of neurocardiac interactions are under development. Li et al . report one such optimized system replicating many anatomical and pathophysiological features of both the individual and combined cardiomyocyte and innervating components mimicking physiological responses in other mammalian systems. Sympathetic nervous system terminals are widely distributed through different cardiac regions, where they release noradrenaline ( f ). Sympathetic activation also triggers adrenal medullary adrenaline release into the circulation. Both transmitters bind to surface membrane β 1 - and β 2 -adrenergic receptors. Of these, the cardiomyocytes express β 1 -adrenergic receptors whose activation triggers widespread actions. Noradrenaline binding activates the stimulatory G-protein G s . Its G α subunit binds guanosine triphosphate (GTP) and is released from the receptor and the βγ - subunit. The G α subunit then activates the adenylyl cyclase, enhancing cyclic 3′,5′-adenosine monophosphate (cAMP) production, increasing cellular cAMP levels. First, cAMP combines with, and maintains open, HCN channels, particularly in SAN cells, increasing, pacemaker current I f and heart rate. Secondly, cAMP activates protein kinase A (PKA), which exerts widespread strategic phosphorylation actions. The latter include exciting Nav1.5, Kv11.1 and Kv7.1, respectively, mediating rapid inward I Na and subsequent outward I Kr and I Ks . PKA also enhances phosphorylation of the C-terminal tail regions of Cav1.2 L-type Ca 2+ channels, increasing their open probability, increasing both amplitude and duration of the ventricular AP plateau. It also accelerates SAN pacemaker potentials. The consequent increased net Ca 2+ entry into the cell increases the rate and force of muscle contraction in subsequent beats. PKA-mediated phosphorylation of RyR2 reduces binding of its regulatory ligand FKBP12, which normally stabilizes its closed state. This dissociation increases the Ca 2+ sensitivity of RyR2, enhancing Ca 2+ -induced Ca 2+ release. Secondly, PKA-mediated phosphorylation of phospholamban (PLN) relieves its inhibition of SERCA2-mediated re-uptake of previously released cytosolic Ca 2+ , enhancing diastolic SR Ca 2+ store re-loading. 
Thirdly, of isoforms of cAMP-dependent exchange proteins directly activated by cAMP (Epac), Epac2 activates CaMKII activity, increasing RyR2-mediated SR Ca 2+ release . Epac1 activation induces programmes of hypertrophic, morphological and cytoskeletal changes. These accompany increased protein synthesis and induction of cardiac hypertrophic markers mediated by Ca 2+ -dependent calcineurin activation. Tomek & Zaccolo describe cellular compartmentation mechanisms in which such diverse cAMP actions might take place. In addition, different sympathetic responses amongst cardiomyocyte types are exemplified by differing electrophysiological properties and responses to noradrenaline of pulmonary vein compared with left atrial cardiomyocytes. These may contribute to atrial ectopy . Parasympathetic, inhibitory, nerve fibre activity slows heart rates and decreases contractile force. The underlying transmitter, ACh, acts through cardiac muscarinic (M 2 ) receptors. ACh–receptor binding activates the coupled G-protein G i2 . These actions occur in SAN, AVN or atrial myocardium in both the presence and absence, but in ventricular tissue only in the presence, of pre-existing adrenergic challenge. The G α subunit binds GTP and splits off from the receptor and its G βγ - subunit. G βγ subunits open inward rectifying I KACh or I KAdo channels particularly in supraventricular tissue, by acting on their GIRK1 and GIRK4 components . This occurs particularly in the SAN but also in atria and ventricles. The dissociated G iα binds to and inhibits adenylate cyclase (AC). This reduces cAMP production in pacemaker cells, lowering their I CaL and I f . G i activation may also upregulate protein phosphatase (PP2A) activity. This likely takes place through a reaction sequence involving cell division control protein 42 homologue (Cdc42)/Ras-related C3 botulinum toxin substrate 2 (rac2) and Pak1. PP2A dephosphorylates PKA-phosphorylated proteins at the same serine/threonine phosphorylation sites. It therefore reverses PKA effects on L-type Ca 2+ channels, RyR2s and the SERCA2a inhibitor PLN. The cardioprotective effects of Pak1 may thus involve increased PP2A activity additional to its potentially strategic remodelling actions discussed in §9 (He et al . ; Jung et al . ) . Recent studies have closely examined its actions in increasing SERCA activity . Finally, adenine nucleotides act as excitatory postganglionic sympathetic co-transmitters on metabotropic P2Y receptors. The resulting adenosine (A 1 ) receptor activation activates protein kinase C (PKC) through phospholipase C-mediated production of diacylglycerol. PKC acts on voltage-gated Na + and K + channels, L-type Ca 2+ channels and RyR2. These G-protein-linked systems show significant amplification. Activating a single β-adrenergic receptor activates many G-proteins. Each then activates an enzyme molecule, in turn producing many cAMP molecules. Each activated PKA molecule then phosphorylates several Ca 2+ channels. Correspondingly, activating one muscarinic receptor produces many G βγ subunits. This opens many GIRK1 channels. Closer characterization of such signalling pathways in iPSCs is a relatively new area of study. Ahmad et al . describe differentiated human iPSCs resembling an atrial phenotype, with the expected electrophysiological and Ca 2+ signalling properties, and specific transcripts, responsive to adrenergic stimulation, therefore permitting studies of such effects.
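The cascade amplification noted above can be illustrated with simple multiplicative arithmetic. Every per-stage gain in the sketch below is an assumed, purely illustrative number; real gains vary widely between cardiomyocyte types and signalling compartments.

```python
# Purely illustrative per-stage gains in the beta-adrenergic cascade:
# receptor -> G-proteins -> adenylyl cyclase/cAMP -> PKA -> phosphorylated channels.
receptors_activated = 1
g_proteins = receptors_activated * 20         # one receptor activates many G-proteins (assumed x20)
camp_molecules = g_proteins * 1000            # each activated cyclase makes many cAMP (assumed x1000)
pka_molecules = camp_molecules // 4           # roughly four cAMP bind per PKA holoenzyme
channels_phosphorylated = pka_molecules * 10  # each PKA phosphorylates several channels (assumed x10)

print(f"1 receptor -> ~{g_proteins} G-proteins -> ~{camp_molecules} cAMP "
      f"-> ~{pka_molecules} PKA -> ~{channels_phosphorylated} phosphorylated channels")
```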
Recent results implicate a normal continuous diurnal ion channel remodelling at the level of SAN pacemaking driven by sympathetic, though not parasympathetic, actions coupling central nervous system suprachiasmatic nuclear circadian rhythms to rhythms within the heart itself. These actions were initially attributed to beat-to-beat autonomic transmitter-mediated modulation of specific ion channel activity . A greater adrenal medullary catecholamine release and cardiac catecholamine content might then explain higher awake than asleep resting heart rates . However, recent evidence implicates a periodic transcriptional cardiac remodelling varying ion channel abundances and their consequent ionic current densities in such diurnal heart rate variations. Anderson et al . discuss this particularly for the HCN channel, exploring possible mechanisms for these findings. About 44% of the sinus node transcriptome, including many important cardiac ion channels, displays a circadian rhythm . This non-canonical sympathetic action was reflected in chronic but not acute pharmacological autonomic blockade inhibiting both this circadian rhythm and the related ion channel transcription . This could involve cAMP response element action promoting the key clock genes, such as Per1 and Per 2.18 . The elaboration of adrenergic and cholinergic cardiac actions through fuller understanding of G-protein signalling allows the original Vaughan Williams Class II to be broadened to include G-protein actions in general. These have translated to therapeutic advances in the form of new selective and non-selective adrenergic antagonists, as well as adenosine receptor and cholinergic muscarinic receptor modulators . Possible future potential targets may arise from the numerous (approx. 150) further orphan GPCRs. There are now new non-selective, β-, and selective β 1 -adrenergic receptor inhibitors, muscarinic M 2 receptor inhibitors and activators, and adenosine A 1 receptor activators. . Cardiomyocyte energetics and excitable properties More recently reported processes affecting longer-term cellular energetics and tissue structure remodelling are also implicated in cardiac arrhythmias. These actions complement the more established acute effects of specific ion channels described above. They are often associated with hypoxic conditions generally , hypertrophic or fibrotic change, cardiac failure, ischaemia-reperfusion and biochemical conditions including obesity, insulin resistance and type 2 diabetes . The resulting oxidative stress and longer-term structural, fibrotic, hypertrophic and inflammatory, changes occur upstream of the membrane-level electrophysiological processes . Normal cardiomyocyte function in human hearts depends on a number of energy-intensive processes consuming kilogram ATP quantities daily. Approximately 30–40% of this cellular ATP is expended maintaining ionic gradients and efficient Ca 2+ cycling ( a,b ). Approximately 90% of the ATP consumption is replenished by the extensive cardiomyocyte mitochondrial network . Arrhythmic disorders, particularly AF, have been associated with the metabolic stress associated with metabolic syndrome . Animal models show abnormal mitochondrial structure early following AF induction . Cardiomyocyte mitochondria from human AF patients show increased DNA damage, structural abnormalities and evidence of impaired function . Atrial tissue from chronic AF patients also shows altered transcription of mitochondrial oxidative phosphorylation-related proteins . 
Decreased mitochondrial complex II/III activity has been reported in permeabilized atrial fibres from patients who developed post-operative AF, corresponding to decreased expression of the gene cluster for mitochondrial oxidative phosphorylation . Finally, right atrial tissue from cardiac surgery patients with an AF history also demonstrated downregulated electron transport chain activity and proton leakage . Mitochondrial dysfunction destabilizes the inner membrane potentials required to drive the electron transport chain, compromising ATP generation. The consequent ATP depletion or rising adenosine diphosphate (ADP) first increases opening probabilities of sarcolemmal K-ATP (sarcKATP) channels . This shortens APDs and consequently the ERPs, predisposing to re-entrant arrhythmia . It hyperpolarizes cell membrane potentials, compromising cell excitability and AP propagation ( c ). Secondly, excessive energetic demand, compromised vascular oxygen supply or pathological energetic disorders associated with mitochondrial dysfunction also increase reactive oxygen species (ROS) production. The normally occurring low ROS levels modulate activity in a range of signalling molecules or signal themselves. These either transiently alter the activity of proteins, or produce more sustained effects through altering transcription factors and gene expression. ROS influence cardiomyocyte excitability, and atrial and ventricular arrhythmic tendency, effects reduced by allopurinol or ascorbate antioxidant challenge. Increased ROS production could underlie shortened atrial ERPs and initiation of AF with rapid pacing . Right atrial appendages of AF patients show increased markers of oxidative stress . Dysregulated ROS production may also reduce cardiac Na + channel expression . In addition, reduced (NADH) or oxidized nicotinamide adenine dinucleotides (NAD + ), reflecting cell oxidative state, respectively inhibit and enhance Nav1.5 activity, despite normal overall Nav1.5 expression, affecting AP conduction . ROS also reduce connexin-43 (Cx43) trafficking and function and the consequent cell–cell coupling . Oxidative stress may also influence cardiomyocyte I K , sarcolemmal K ATP channels and I Ca . Thirdly, oxidative stress may also influence Ca 2+ homeostasis. ROS oxidize RyR2, increasing SR Ca 2+ leak, increasing cytosolic [Ca 2+ ] i . It thus altered intracellular Ca 2+ cycling in ageing rabbit ventricular myocytes, its effects reversed by a mitochondrial specific ROS scavenger . Oxidative stress also reduces SERCA-mediated Ca 2+ re-uptake . CaMKII may also be redox-sensitive, with oxidation resulting in kinase activity similar to auto-phosphorylated CaMKII : pharmacological CaMKII inhibition prevented H 2 O 2 -induced ventricular arrhythmias . ROS also oxidize and activate PKA . Finally, ROS may be linked to cardiac fibrosis through fibroblast activation and production of transforming growth factor-β (TGF-β) (§9) . Finally, both CaMKII and ROS could increase I NaL (Liu et al . ) Several transcriptional coactivators regulate mitochondrial mass and function ( d ) . Of these, the peroxisome proliferator activated receptor (PPAR) γ coactivator-1 (PGC-1) family, including PGC-1α and PGC-1β, is highly expressed in oxidative tissues, including heart, brain, skeletal muscle and kidney. Either PGC-1α or PGC-1β suffices to activate gene regulatory programmes increasing cellular energy production capacity. 
PGC-1 protein expression increases with a number of upstream signals linking cellular energy stores and external stimuli including cold exposure, fasting and exercise, matching mitochondrial activity to cellular energy requirements. PGC-1s act through numerous nuclear receptor targets including PPARα, PPARβ and oestrogen-related receptor alpha (ERRα). PGC-1α also coactivates nuclear respiratory factor-1 (NRF-1) and -2 (NRF-2) . The latter modulate expression of the nuclear-encoded transcription factor Tfam, essential for replication, maintenance and transcription of mitochondrial DNA . They also regulate expression of other proteins required for mitochondrial function, including respiratory chain subunits . PPARα is also a key regulator of genes involved in mitochondrial fatty acid oxidation. ERRα is an important regulator of mitochondrial energy transduction pathways, including fatty acid oxidation and oxidative phosphorylation . In cardiac cells, PGC-1α interaction with NRF-1, ERRα and PPARα also increases mitochondrial biogenesis . Forced PGC-1 expression in cultured cardiomyocytes induced expression of nuclear genes encoding mitochondrial proteins involved in other energy production pathways, including the tricarboxylic acid cycle, and nuclear and mitochondrial genes encoding components of the electron transport chain and oxidative phosphorylation complex . PGC-1 proteins, through these interactions, thus exert multi-level regulation of cellular mitochondrial function and metabolism as a whole. PGC-1 levels fall in obesity, insulin resistance, type II diabetes mellitus and ageing in parallel with mitochondrial dysfunction . Mice deficient in both Pgc-1α and Pgc-1β develop a low cardiac output state and conduction system disease, dying before weaning . Ablating either PGC-1α or PGC-1β produces a milder phenotype, permitting physiological study. Pgc-1α −/− hearts have normal baseline contractile function but develop cardiac failure with increased afterload . Pgc-1β −/− hearts showed similarly normal baseline features but blunted heart rate responses compared with WT hearts following adrenergic challenge . They also showed an increased arrhythmic propensity. Langendorff-perfused Pgc-1β −/− hearts demonstrated APD alternans, and more frequent episodes of VT in response to programmed electrical stimulation . Single-cell studies revealed alterations in the expression of a number of ion channels as well as evidence of spontaneous diastolic Ca 2+ transients, previously associated with pro-arrhythmic after-depolarizations. Chronic studies of the effects of mitochondrial impairment on the development of pro-arrhythmic phenotypes compared young (12–16 weeks) and aged (older than 52 weeks) Pgc-1β −/− mice with age-matched WT. Chronotropic incompetence in intact animals suggested SND and a paradoxical negative dromotropic response suggested AVN dysfunction, following β 1 -adrenergic challenge . Sharp microelectrode AP recordings in both atria and ventricles of Langendorff-perfused Pgc-1β −/− hearts during programmed electrical stimulation demonstrated arrhythmic phenotypes progressing with age. This accompanied reduced (d V /d t ) max , prolonged AP latencies, reduced APD, and a consequently reduced AP wavelength ( λ ) correlating with Pgc-1β −/− arrhythmogenicity . These findings could be accounted for by loose patch-clamp demonstrations of reduced I Na but not of I K in Pgc-1β −/− atrial and ventricular preparations . Finally, the Pgc-1β −/− hearts showed accelerated fibrotic change with age (see §9; ).
. Cardiac remodelling and excitable properties Remodelling of molecular and physiological processes as well as of cardiac structure can occur over all timescales, and involve any cardiac region(s). There have been recent suggestions implicating non-canonical sympathetic actions in normal diurnal variations in ion channel expression (§7). SAN pacemaking can also be remodelled in disease. Logantha et al . report altered SAN ion channel-, Ca 2+ -handling- and fibrosis-related gene expression and implicate these in the SAN dysfunction in a rat pulmonary arterial hypertension model. Investigations of detailed mechanisms are in their infancy. He et al . review one line of investigation exploring possible protective signalling actions of PAK1 possibly through altering Cav1.2/Cav1.3 ( I CaL )-mediated Ca 2+ entry, RyR2-mediated SR Ca 2+ release and CaMKII-mediated transcriptional regulation of SERCA2a and NCX. Conversely, Jung et al . demonstrate that PAK1 deficiency promotes atrial arrhythmogenesis under adrenergic stress conditions, likely through posttranslational and transcriptional modifications of key molecules, including RyR2 and CaMKII, critical to Ca 2+ homeostasis. Longer-term cardiac remodelling involving anatomical, fibrotic and/or hypertrophic change can also occur in cardiac disease processes. The nature of their possible mechanisms are here exemplified by a simplified summary of angiotensin II (AngII) action through its angiotensin receptor type 1 (ATR 1 ) . Although classically implicated in systemic blood pressure regulation and Na + and H 2 O homeostasis, ATR 1 activation also stimulates the inflammatory cell recruitment, angiogenesis, cellular proliferation, and accumulation of extracellular matrix (ECM) associated with cardiac hypertrophy and fibrosis . These actions may involve a local cardiac renin–angiotensin system (RAS) thought also to exist in other organs, including blood vessels, brain, kidney, liver and skin. Tissue RASs are functionally autonomous systems of known importance in fibrotic change. They also exert longer-term actions on surface electrophysiological ( a,b ) and Ca 2+ homeostatic activity ( c ), through potential actions of fibrotic and hypertrophic change on AP conduction ( d ). ATR 1 s act through both G-protein-, G α q/11 , G α 12/13, and G β y , and non-G-protein-related signalling pathways ( e ), then on multiple, oxidase and kinase signalling pathways ( f ). These include the serine/threonine kinases CaMKIII and protein kinase C (PKC), and the mitogen-activated protein kinases (MAPK) extracellular signal-regulated protein kinase 1/2 (ERK1/2), c-Jun NH 2 -terminal kinase (JNK) and p38 mitogen-activated protein kinases (p38MAPK). Signalling can also involve receptors, including platelet-derived growth factor (PDGF), epidermal growth factor receptor (EGFR) and insulin receptors, and the non-receptor tyrosine kinases Src, Janus kinase/signal transducer and activator of transcription IL (JAK/STAT) and focal adhesion kinase (FAK) . ATR 1 -mediated NAD(P)H oxidase activation following PKC activation leads to ROS generation, implicated in cardiomyocyte hypertrophy . The PKC activation also mediates a galectin-3-dependent fibrosis in HL-1 cells. AngII- or ROS-mediated CaMKII activation, in addition to enhancing phosphorylation of protein targets related to excitation–contraction coupling and cell survival, also did so for transcription factors driving hypertrophic and inflammatory gene expression . 
Activation of the MAPKs, ERK1/2, p38MAPK and JNK, has been implicated in cell growth and hypertrophy . It is also implicated in cardiac fibrosis through increasing gene transcription for procollagen I, procollagen III and fibronectin, and TGF-β, with TGF-β also directly activated by AngII-ATR 1 binding. The family of TGFs in turn critically regulates tissue homeostasis and repair, immune and inflammatory responses, ECM deposition, and cell differentiation and growth . TGF-β1, expressed in almost all tissues, is the most prevalent member. TGF-β1 overexpression, acting canonically through Smad and non-canonically and synergistically through ERK1/2, JNK and p38MAPK signalling, is a key contributor to fibrosis in most tissues . TGF-β1 stimulates myofibroblast differentiation and the synthesis of ECM proteins, and preserves them by inhibiting matrix metalloproteinases (MMPs) and inducing synthesis of tissue inhibitors of metalloproteinases (TIMPs) . TGF-β1 has been demonstrated to induce fibroblast proliferation, in turn leading to atrial fibrosis , SND and AF . AngII acts both by itself and in synergy with TGF-β1 to induce fibrosis ; its fibrogenic effects also have been linked to its activation of TGF-β1 signalling . Amongst non-receptor tyrosine kinases, JAK-STAT signalling has been implicated in cardiac hypertrophy and remodelling under conditions of pressure overload and ischaemic pathology . Langa et al . discuss emerging data, from variant cTnT-I79N +/− hiPSC-CM cells, implicating upregulated Notch signalling elements, particularly in hypertrophic (HCM) and dilated (DCM) cardiomyopathy, conditions potentially constituting future therapeutic targets in their own right. Fibrotic change could be implicated in AF through its action in reducing, and increasing heterogeneities in, AP conduction velocity, thereby affecting the integrity of AP propagation wavefronts. AF also accompanies some Na + channelopathies . Therapeutic exploration within this area has thus far targeted remodelling processes rather than their consequent electrophysiological properties. This is exemplified by now-available angiotensin-converting enzyme inhibitors and angiotensin receptor blockers, aldosterone receptor antagonists, 3-hydroxy-3-methyl-glutaryl-CoA reductase inhibitors (statins), and n -3 (ω−3) polyunsaturated fatty acids . Nevertheless, anti-arrhythmic drugs in this class may be possible . Thus PAK1, a key cardiomyocyte regulator of ion channel activity, Ca 2+ homeostasis and cardiac contractility , may offer cardioprotective actions through inhibiting maladaptive, pro-arrhythmic, hypertrophic remodelling and progression in cardiac failure , actions of possible therapeutic utility (He et al . ; see also §§6 and 7).

Cycles of physiological discovery and their clinical translation

The developments outlined here extend Weidmann's initial key electrophysiological studies and Vaughan Williams's classification of cardiac drugs and physiological and therapeutic targets, and have resulted in the development of novel therapeutic classification schemes. The updating by a Working Group of the European Society of Cardiology provided a more complete, flexible pathophysiological framework predicting pro-arrhythmic circumstances, often termed the Sicilian Gambit . However, this did not seek or find extensive use as a formal classification scheme.
A more recent reclassification of pharmacological targets and anti-arrhythmic agents related the more recently characterized ion channels, transporters, receptors, intracellular Ca 2+ - handling and cell-signalling molecules to their physiological, and potential and actual therapeutic actions. These were organized by strategic aspects of cardiac electrophysiological function paralleling the coverage in this Phil. Trans. B theme issue ( a (ii), b ). In so doing it was possible also to classify both existing and potential cardiac drugs and currently acceptable and potential sites of drug action. This classification also sought to facilitate future developments of investigational new anti-arrhythmic drugs. It expanded and updated established Singh–Vaughan Williams classes, in particular introducing target classes encompassing the longer-term processes in §§8 and 9. It added to Class I I NaL components with implications for long QT syndrome type 3 (LQTS3). A broadened Class II more fully dealt with G-protein signalling, and an expanded Class III incorporated subsequently discovered K + channel subtypes. A much increased Class IV encompassed recent findings on Ca 2+ homeostasis and excitation–contraction coupling. New classes recognized SAN automaticity (Class 0), and mechanically sensitive (Class V) and gap junction channels (Class VI), and longer-term energetic changes and structural remodelling (Class VII). The revised scheme thus provided a simple working model for cardiomyocyte function in which arrhythmia followed abnormal cardiac electrophysiological activation, linking particular therapies with then-known mechanistic targets (referenced in ). The physiological sciences have long worked in a succession of cycles involving mutually reinforcing interactions between laboratory and clinic. Identification of a clinical problem, particularly its aetiology, epidemiology, diagnosis, and natural history, or of novel physiological phenomena, prompts development of experimental models for the related disease process. These could augment mechanistic and clinically translatable understanding currently incomplete even for common and important arrhythmic conditions such as AF (Hu et al . ). The resulting physiological insights would prompt clinical tests and explorations for management and treatment. In turn, feedback of the outcomes of these continues the iterative cycles of experimental and clinical testing, activities currently termed translational medicine, for which some current efforts have been recently summarized (see supplementary file in ). The particular cycle of efforts represented in this present issue might then prompt further attempts at usefully determining physiological targets for investigational new drugs and other interventions directed at cardiac arrhythmic disease.
Increasing associative plasticity in temporo-occipital back-projections improves visual perception of emotions
65432560-c216-431f-83c7-df04a64b72ca
10517146
Physiology[mh]
Humans excel in perception of emotion from other people’s facial expressions, an ability fundamental for effective social interactions and linked to situations ancestrally relevant for survival . Traditionally, neuroscience research has documented enhanced responses to facial expressions in visual areas , . However, this activity has often been interpreted as consequential, rather than integral, to emotional appraisal, which has been assumed to occur elsewhere in the brain (e.g., in the limbic system) , . Yet, recent evidence indicates that sensory representations, measured as patterns of activity confined within the visual system, can be sufficient for accurate perception of emotions – . Functional and connectional properties converge toward a specialized and partly segregated pathway for facial expression recognition that begins in the early visual cortex (V1/V2), and terminates downstream in the posterior banks of the superior temporal sulcus (pSTS) , . For example, activity in V1/V2 and pSTS selectively responds to reward signals, predicts category-specific emotion perception, and encodes various affective states according to several gradients , . Face-selective patches of pSTS receive projections directly from the periphery of V1/V2 , and through polysynaptic connections with intermediate stations that include the occipital face area (OFA) and middle temporal motion-processing area (V5/MT+) , . In turn, pSTS sends direct and indirect back-projections to V1/V2 , . Consistent with these reciprocal connections, pSTS and V1/V2 also show intrinsic functional connectivity at rest , . This functional coupling is predictive of inter-individual differences in emotion recognition accuracy , and increases during the perception of emotional expressions . However, a causal explanation of how pSTS and V1/V2 coordinate to support efficient emotion perception remains elusive and requires characterization of the network along several dimensions. In this context, two critical elements of qualification concern the directionality and timing of information flow. Rather than a feedforward readout and linear integration of visual information along the cortical hierarchy, current models of visual awareness and perceptual decision-making assign increasing relevance to reentrant projections and recursive processing as general principles of visual recognition – . Accordingly, information transmission is reciprocal between adjacent stages and, in most cases, backward projections broadly outnumber forward projections . Transcranial magnetic stimulation (TMS) has proven to be an ideal tool for probing the timing and function of feedback activity in the visual system. In fact, TMS can assess the causal impact of regional cortical activity on specific perceptual functions with millisecond precision. Moreover, because the neural activity induced by TMS spreads to anatomically connected regions , , cortico-cortical information flow can be traced and temporal dynamics investigated. For example, seminal TMS studies have targeted back-projections from V5 to V1 in an early time window of approximately 40 ms and have demonstrated that they are necessary for visual awareness of motion , . Another avenue of inquiry concerns the malleability of these reentrant connections and whether plasticity can be recruited to improve visual perception.
A TMS protocol, named cortico-cortical paired associative stimulation (ccPAS), can effectively strengthen synaptic connections and induce Hebbian plasticity that critically depends on both the direction and the timing of connectivity – . The ccPAS protocol involves the repeated pairing of TMS pulses over two brain areas with an interstimulus interval (ISI) consistent with the propagation of signals from the ‘pre-synaptic’ to the ‘post-synaptic’ target nodes – . This stimulation determines spike timing-dependent plasticity (STDP) – that is associated with changes in the strength of effective cortico-cortical connectivity between targeted areas, as shown by physiological assays addressing motor areas – . Notably, recent studies have applied ccPAS to the visual system, showing that strengthening reentrant connections from V5/MT to V1/V2 with an optimal timing of 20 ms between pulses has a transient impact on perceptual judgments, as it leads to enhanced detection of motion coherence, evident between 30 and 60 minutes after the stimulation – . However, there is currently no evidence that similar short-term plastic changes can be induced in brain areas like pSTS that are traditionally assigned to the ventral visual stream. Moreover, pSTS has been recently proposed as the terminal site of a third temporo-occipital pathway specialized for social perception , encompassing projections from early visual cortex (V1/V2) via motion-selective areas (MT+/V5) , – . To address this issue, in the present study we tested the relevance and functional selectivity of back-projections from pSTS to V1/V2 in the perception of facial expressions. In a series of experiments, we provided causal evidence that 1) reentrant projections from pSTS to V1/V2 are functionally malleable with ccPAS; 2) exogenous strengthening of back-projections boosts sensitivity to facial expressions under noisy and difficult perceptual conditions; 3) this behavioral effect induced by ccPAS dovetails with enhanced electrophysiological activity in the pSTS-V1/V2 network in response to facial expressions, with maximal activity over V1/V2; 4) these plastic changes critically depend on the directionality and physiologically-defined timing of brain connectivity; and 5) they do not extend to other perceptual judgments, such as perception of gender under identical experimental conditions. Experiment 1 combined TMS and EEG to assess the temporal profile of signal propagation from pSTS to V1/V2 , , and identified 200 ms as the optimal timing to mimic STDP and respect the Hebbian principle of consequentiality – . Experiment 2 exploited this knowledge to devise a time-resolved ccPAS protocol tailored for pSTS-to-V1/V2 reentrant connections. We showed that transient enhancement of emotion perception is contingent upon a stimulation interval of 200 ms between the two TMS pulses, as it disappeared with different intervals or when pSTS and V1/V2 were stimulated synchronously. Experiment 3 addressed direction-specificity and showed that improvements in emotion recognition do not occur when feedforward connections between V1/V2 and pSTS are stimulated, or when sham stimulation is delivered. Experiment 4 tested functional specificity, applying the same ccPAS protocol with a control task matched for difficulty and requiring participants to discriminate gender instead of facial expressions.
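The paired-pulse logic described above can be made concrete with a minimal sketch of a ccPAS session schedule. This sketch is purely illustrative rather than the stimulator code used in the study; the pair count, the 0.1 Hz pair rate and the site labels are placeholders, and only the pre-to-post ordering and the ISI reflect the protocol logic described in the text.

def ccpas_schedule(n_pairs=90, pair_interval_s=10.0, isi_ms=200.0,
                   pre_site="pSTS", post_site="V1/V2"):
    """Return a list of (time_s, site) TMS pulse events for one ccPAS session."""
    events = []
    for k in range(n_pairs):
        t_pre = k * pair_interval_s              # first ('pre-synaptic') pulse of the pair
        t_post = t_pre + isi_ms / 1000.0         # second ('post-synaptic') pulse, one ISI later
        events.append((t_pre, pre_site))
        events.append((t_post, post_site))
    return events

schedule = ccpas_schedule()
print(len(schedule), "pulses; first pair:", schedule[:2])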
Finally, Experiment 5 measured event-related potentials (ERPs) to examine the electrophysiological correlates of improved perception following the critical ccPAS manipulation. After ccPAS, early ERPs elicited by facial expressions (i.e., the P1 component) were enhanced in amplitude. Consistent with the Hebbian principle, the neuronal generators of this enhanced P1 amplitude were maximally expressed over the V1/V2 site, where TMS activations converged due to ccPAS targeting of pSTS-to-V1/V2 projections. A total of 155 healthy young adults were recruited in 5 experiments and randomly assigned to 11 groups according to the specific TMS protocol administered and the task they were asked to perform.

Experiment 1: Tracking signal propagation from pSTS to V1/V2

A first TMS-EEG co-registration study was designed to track signal propagation from pSTS to V1/V2 and estimate its timing. To this aim, we administered active and sham single-pulse TMS over the right pSTS in a group of 10 participants while EEG signals were continuously recorded. We analyzed the time-course of TMS-evoked responses at the sensor level (occipital electrodes O1, Oz, O2) and the source level in a region of interest (ROI) centered over the right occipital pole, corresponding to the location of V1/V2. Following pSTS stimulation, the maximal EEG peak of TMS-evoked activity was recorded from occipital electrodes (Fig. ) and the V1/V2 ROI (Fig. ) after ~200 ms, consistent with the recruitment of long-range and polysynaptic reentrant temporo-occipital connections. Smaller and short-lasting activations were also observed in both V1/V2 and pSTS after ~100 ms, but they were not temporally specific or clearly distinct from activations observed over the stimulated pSTS itself (Supplementary Fig. ). Based on these findings, in Experiments 2–5 we selected 200 ms as the critical ISI for targeting pSTS-to-V1/V2 back-projections, thus devising a novel long-latency (200 ms) ccPAS protocol (Supplementary Fig. ). An ISI of 100 ms was used as a control for testing the protocol’s temporal specificity.

Experiment 2 – Time-specific activation of the pSTS-to-V1/V2 pathway enhances visual perception of emotions from facial stimuli

In Experiment 2, we tested whether a long-latency ccPAS protocol aimed at strengthening pSTS-to-V1/V2 back-projections modulates the ability to perceive emotional expressions under challenging and noisy perceptual conditions. Forty-two participants were randomly assigned to three different stimulation groups, according to the temporal properties of the ccPAS protocol (Fig. ). The Experimental group (Exp2 STS-V1 ) underwent a ccPAS protocol repeatedly activating the pathway connecting pSTS and V1/V2; this protocol involved the administration of 90 pairs of TMS pulses, with the first pulse of each pair targeting the right pSTS and the second pulse targeting V1/V2 after a 200-ms ISI. Based on Experiment 1, pSTS-V1/V2 stimulation with this ISI was expected to induce STDP in temporo-occipital back-projections. Two control ccPAS conditions similarly targeted pSTS and V1/V2 but the ISI between the pulses was manipulated to prevent induction of STDP. In the first control group (Ctrl 100ms ), we set the ISI between pSTS and V1/V2 stimulation to 100 ms, whereas in the second control group (Ctrl 0ms ), the TMS pulses were delivered simultaneously (i.e., at an ISI of 0 ms; see Methods for details). Participants in all groups were asked to perform the same emotion perception task (Fig.
), whereby faces showing expressions associated with happiness and fear were briefly presented in a sandwich masking procedure at three different exposure durations: 17, 33, or 50 ms (Fig. ). The task was administered before undergoing the assigned ccPAS protocol (i.e., at baseline), immediately after the ccPAS (T0), and again at 20, 40, 60, and 80 min following the ccPAS procedure (T20-T80; Fig. ). A preliminary control analysis ensured that performance at baseline was comparable across all groups in Experiment 2, as well as in the other experiments (Supplementary Table , , and ). The ccPAS (Exp2 STS-V1 , Ctrl 0ms , Ctrl 100ms ) × Exposure time (17, 33, 50 ms) × Time from ccPAS (T0, T20, T40, T60, T80) ANOVA on baseline-corrected d’ values showed a non-significant main effect of ccPAS ( F 2,39 = 2.71; p = 0.079) and a significant ccPAS × Exposure time interaction ( F 4,78 = 2.56; p = 0.045; η p 2 = 0.12; see Fig. ). Post-hoc analysis showed that expression recognition improved following the ccPAS protocol in the most difficult condition — i.e., when faces were briefly displayed for 17 ms — but only for participants assigned to the experimental group (Exp2 STS-V1 ). The improvement observed in the experimental group at the 17-ms exposure time was greater than in any other condition (all p ≤ 0.044; all Cohen’s d ≥ 0.54; black asterisks in Fig. ); there were no other differences between groups or exposures (all p ≥ 0.11). A further two-tailed t -test showed that d’ values for expression recognition at an exposure timing of 17 ms were statistically higher following Exp2 STS-V1 ccPAS compared to the pre-ccPAS baseline ( t 13 = 4.39; p < 0.001; Cohen’s d = 1.17; red asterisks in Fig. ). No other ANOVA main effects or interactions reached significance (all F ≤ 1.91; all p ≥ 0.13), including the 3-way interaction ( F 8.4,163.0 = 1.05; p = 0.37). This indicates that the increased sensitivity, contingent upon Exp2 STS-V1 ccPAS, was comparable across post-ccPAS time points and lasted for at least 80 min. Finally, increased sensitivity to briefly presented emotional faces was not due to changes in decision criteria or speed/accuracy trade-offs, as we observed no effect of ccPAS on response bias (β) or response times (RTs) (Supplementary Table ).

Experiment 3 – Direction-specific activation of the pSTS-to-V1/V2 pathway enhances visual perception of emotion from facial stimuli

Experiment 3 investigated the directional specificity of the neurostimulation protocol. Thirty-nine new participants performed the same task and underwent the same general procedure used in Experiment 2 and were evenly assigned to three groups based on the ccPAS protocol (Fig. ). The Experimental group (Exp3 STS-V1 ) was subjected to the same ccPAS protocol described in Experiment 2, i.e., first pulse over pSTS and second pulse over V1/V2 at the critical 200-ms ISI. In the first control group (Ctrl V1-STS ), we reversed the order of the two TMS pulses: the first pulse of each TMS pair was delivered to V1/V2 and the second pulse to pSTS using the same 200-ms ISI, to potentially target feedforward connections from V1/V2 to pSTS. In a second control group (Ctrl Sham ), the ccPAS protocol was delivered using the same parameters as in the experimental condition but with the coil tilted at 90 degrees, thereby preventing the induction of currents in the brain.
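For readers unfamiliar with the dependent measures used in these analyses, the sketch below illustrates how sensitivity (d′), the criterion and the likelihood-ratio bias β can be computed from hit and false-alarm counts in a 2AFC task such as this one. It is an illustration rather than the analysis script used in the study, and the example counts are invented.

import numpy as np
from scipy.stats import norm

def sdt_measures(n_hits, n_signal, n_fas, n_noise):
    """Sensitivity d', criterion c and likelihood-ratio bias beta from raw counts.
    Rates are clipped away from 0 and 1 so that the z-scores stay finite."""
    hit_rate = np.clip(n_hits / n_signal, 1 / (2 * n_signal), 1 - 1 / (2 * n_signal))
    fa_rate = np.clip(n_fas / n_noise, 1 / (2 * n_noise), 1 - 1 / (2 * n_noise))
    z_hit, z_fa = norm.ppf(hit_rate), norm.ppf(fa_rate)
    d_prime = z_hit - z_fa
    criterion = -0.5 * (z_hit + z_fa)
    beta = np.exp((z_fa ** 2 - z_hit ** 2) / 2.0)
    return d_prime, criterion, beta

# Invented example: 26/32 'fearful' trials labelled fearful, 9/32 'happy' trials mislabelled
print(sdt_measures(26, 32, 9, 32))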
The ccPAS (Exp3 STS-V1 , Ctrl V1-STS , Ctrl Sham ) × Exposure time (17, 33, 50 ms) × Time from ccPAS (T0, T20, T40, T60, and T80) ANOVA on baseline-corrected d’ values showed a main effect of Exposure time ( F 1.7,61.9 = 4.32; p = 0.02; η p 2 = 0.11) and, more importantly, a significant ccPAS × Exposure time interaction ( F 3.4,61.9 = 2.64; p = 0.05; η p 2 = 0.13; Fig. ). We replicated the main results of Experiment 2, with a selective improvement in expression recognition only in the group subjected to pSTS-to-V1/V2 ccPAS at a 200-ms ISI (Exp3 STS-V1 ) and during short stimulus exposures (17 ms) relative to the other conditions (all p ≤ 0.037; all Cohen’s d ≥ 0.67; black asterisks in Fig. ). Notably, post-hoc analyses showed no changes in face recognition when feedforward connections were targeted (Ctrl V1-STS ; all p ≥ 0.17) or when sham stimulation was administered (Ctrl Sham ; all p ≥ 0.16). A two-tailed t -test showed that d’ values for perception of facial expressions displayed for 17 ms were higher following Exp3 STS-V1 ccPAS compared to baseline levels ( t 12 = 3.80; p = 0.003; Cohen’s d = 1.05; red asterisks in Fig. ). No other main effects or interactions reached significance (all F ≤ 1.55; all p ≥ 0.17). Once again, in Experiment 3, changes in sensitivity were not due to shifts in decision criteria or to speed/accuracy trade-offs (Supplementary Table ).

Experiment 4 – Functional specificity of pSTS-to-V1/V2 back-projections

Experiment 4 tested the functional specificity of pSTS-to-V1/V2 ccPAS. We substituted the emotion perception task with a gender perception task, while keeping the stimuli and ccPAS protocols identical (Fig. ). As previously established – , the gender perception task involves the ability to process morphological facial features and relies on ventral occipito-temporal face areas, rather than pSTS. Twenty-eight new participants were evenly assigned to two different groups. One gender task group was subjected to the same active stimulation of pSTS-to-V1/V2 back-projections at a 200-ms ISI that proved effective at enhancing emotion perception in the previous experiments (Ctrl-Gender STS-V1 ), while another control group received sham stimulation (Ctrl-Gender Sham ) (Fig. ). We found no evidence that the same ccPAS protocol targeting pSTS-to-V1/V2 connections, which previously enhanced perception of emotions, also modulates perception of gender. In fact, the ccPAS condition (Ctrl-Gender STS-V1 , Ctrl-Gender Sham ) × Exposure time (17, 33, 50 ms) × Time from ccPAS (T0, T20, T40, T60, T80) ANOVA showed no significant main effects or interactions (all F ≤ 2.31; p ≥ 0.08; Supplementary Table ). In addition, we found no effect of ccPAS on β or RTs (Supplementary Table ). To further assess the functional specificity of pSTS-to-V1/V2 back-projections, we directly compared the three groups that received the same pSTS-to-V1/V2 ccPAS protocol but performed either the emotion perception task (in Experiments 2 and 3) or the gender perception task (in Experiment 4). The Experiment (Exp2 STS-V1 , Exp3 STS-V1 , Ctrl-Gender STS-V1 ) × Exposure time (17, 33, 50 ms) × Time from ccPAS (T0, T20, T40, T60, T80) ANOVA on baseline-corrected d’ values showed significant main effects of Experiment ( F 2,38 = 5.53; p = 0.008; η p 2 = 0.23) and Exposure time ( F 2,76 = 11.27; p < 0.001; η p 2 = 0.23) and a significant Experiment × Exposure time interaction ( F 4,76 = 3.16; p = 0.02; η p 2 = 0.14; Fig. ).
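The baseline-versus-post comparisons and paired effect sizes reported throughout these results can be computed as in the following sketch. It is illustrative only: the numbers are simulated and do not reproduce the reported statistics, and Cohen's d is given in its paired-samples, difference-score form.

import numpy as np
from scipy.stats import ttest_rel

rng = np.random.default_rng(0)
d_baseline = rng.normal(1.0, 0.4, 14)            # hypothetical baseline d' for 14 participants
d_post = d_baseline + rng.normal(0.5, 0.4, 14)   # hypothetical post-ccPAS d'

t_stat, p_value = ttest_rel(d_post, d_baseline)  # two-tailed paired t-test
diff = d_post - d_baseline
cohen_dz = diff.mean() / diff.std(ddof=1)        # paired-samples (difference-score) effect size
print(f"t({diff.size - 1}) = {t_stat:.2f}, p = {p_value:.3f}, dz = {cohen_dz:.2f}")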
Importantly, d’ values increased at 17-ms exposure times for emotion perception, but not for gender perception under identical neurostimulation and visual presentation conditions. In both Exp2 STS-V1 and Exp3 STS-V1 , d’ values for emotion perception increased at 17-ms exposure times compared to the other exposure durations (all p ≤ 0.034; all Cohen’s d ≥ 0.54; black asterisks in Fig. ); moreover, these increases in d’ were larger than the comparable values for gender perception in the Ctrl-Gender STS-V1 group (all p ≤ 0.03; all Cohen’s d ≥ 0.86). The increase in d’ for emotion perception was comparable across Experiments 2 and 3 ( p = 0.41). No other effects were observed (all F ≤ 1.26; all p ≥ 0.27).

Experiment 5 – Electrophysiological correlates of improved perception following activation of the pSTS-to-V1/V2 pathway with ccPAS

Experiment 5 integrated ERPs to investigate the electrophysiological correlates of improved visual perception of emotions induced by long-latency ccPAS (Figs. and ). Thirty-six new participants were randomly assigned to the Experimental group (Exp5 STS-V1 ) targeting pSTS-to-V1/V2 back-projections with the critical 200-ms ISI, or a Control group (Ctrl V1-STS ) in which we reversed the order of the two TMS pulses, as in Experiment 3. To prevent EEG activity due to the initial presentation of a scrambled picture, the first projected image was a face (i.e., sandwich masking was simplified to backward masking; Fig. ). Moreover, we concentrated on the most relevant conditions: face-evoked ERPs were recorded during an emotion perception task at baseline (pre-ccPAS) and right after ccPAS (T0), and for the shortest stimulus exposure (17 ms), which was the only presentation condition previously found to be affected by ccPAS. All other aspects of the behavioral task remained identical to the previous experiments. Figure shows the results of Experiment 5. Behavioral findings replicated the perceptual improvement observed in prior experiments. Indeed, an ANOVA with the factor ccPAS (Exp5 STS-V1 , Ctrl V1-STS ) on baseline-corrected d’ values showed a greater increase in Exp5 STS-V1 compared to Ctrl V1-STS ( F 1,33 = 7.64; p = 0.009; η p 2 = 0.19; black asterisks in Fig. ). We also observed significantly higher d’ values following Exp5 STS-V1 ccPAS compared to pre-ccPAS baseline levels ( t 17 = 4.41; p < 0.001; Cohen’s d = 1.04; red asterisks in Fig. ). See Supplementary Table for RTs and β. We investigated the influence of ccPAS on three key ERP components (P1, N170, P2) recorded over the right (stimulated) and the left (unstimulated) hemispheres (Fig. ; Supplementary Table ). The ccPAS selectively affected the P1 component recorded over the targeted hemisphere. A ccPAS (Exp5 STS-V1 , Ctrl V1-STS ) × Electrode cluster (Left, Right) ANOVA on baseline-corrected P1 amplitudes showed a main effect of Electrode cluster ( F 1,34 = 10.53; p = 0.003; η p 2 = 0.18), no main effect of ccPAS ( F 1,34 = 0.63; p = 0.43), but, importantly, a ccPAS × Electrode interaction ( F 1,34 = 7.24; p = 0.01; η p 2 = 0.18; Fig. ). Following Exp5 STS-V1 ccPAS, we observed an enhancement of P1 amplitudes over the right electrode cluster (PO8, P8, P6) relative to the other conditions (all p ≤ 0.03; all Cohen’s d ≥ 0.69; black asterisks in Fig. ). No modulation was observed in the Ctrl V1-STS group between clusters ( p = 0.70).
A two-tailed t -test showed that P1 amplitudes in the right cluster – corresponding to the targeted hemisphere during ccPAS – were higher following Exp5 STS-V1 ccPAS compared to baseline levels ( t 17 = 3.77; p = 0.002; Cohen’s d = 0.89; red asterisks in Fig. ). Scalp maps show that the ERP enhancement in the P1 time window following Exp5 STS-V1 ccPAS, but not Ctrl V1-STS , was mostly localized over right posterior electrodes (Fig. ). Source estimation in the P1 time window revealed that the neuronal generator of the effect was mostly localized in occipital cortices (with the peak of activation at Talairach coordinates x = 12, y = −98, z = −3, i.e., overlapping with the V1/V2 site targeted during ccPAS), but also included a source compatible with the location of pSTS, with significantly higher activations for Exp5 STS-V1 than Ctrl V1- STS (Fig. ). The ccPAS × Cluster ANOVAs on baseline-corrected N170 and P2 amplitudes showed no significant effects (all F ≤ 1.20; all p ≥ 0.28), suggesting that ccPAS selectively influenced early (P1) but not later ERP components (see Supplementary Table ).
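As an illustration of the ERP measure analyzed here, the sketch below extracts a mean amplitude over the right posterior cluster (PO8, P8, P6) in a P1-range time window and contrasts pre- and post-ccPAS evoked responses. The 80–130 ms window and the synthetic data are assumptions made for the example, not parameters taken from the study.

import numpy as np

def cluster_amplitude(erp, ch_names, times, cluster=("PO8", "P8", "P6"), window=(0.080, 0.130)):
    """Mean amplitude over a channel cluster and time window.
    erp: channels x time array (volts); times: vector in seconds."""
    rows = [ch_names.index(ch) for ch in cluster]
    cols = np.where((times >= window[0]) & (times <= window[1]))[0]
    return erp[np.ix_(rows, cols)].mean()

# Toy pre- and post-ccPAS ERPs: 64 channels, -100 to 500 ms sampled at 1 kHz
times = np.arange(-0.1, 0.5, 0.001)
ch_names = [f"ch{i:02d}" for i in range(61)] + ["PO8", "P8", "P6"]
rng = np.random.default_rng(1)
erp_pre = rng.normal(0.0, 1e-6, (64, times.size))
erp_post = rng.normal(0.0, 1e-6, (64, times.size))

delta_p1 = cluster_amplitude(erp_post, ch_names, times) - cluster_amplitude(erp_pre, ch_names, times)
print(f"change in P1 cluster amplitude after ccPAS: {delta_p1:.2e} V")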
How the human visual system represents emotional signals and coordinates behavioral responses is a thriving topic in neuroscience. The present study delineates the features that permit short-term synaptic strengthening of reentrant connections from pSTS to V1/V2, leading to enhanced perception of facial expressions. Prior work has suggested a functional coupling between pSTS and V1/V2 at rest and during emotion perception – . Although growing evidence suggests a key role for reentrant networks in visual awareness and perceptual decision-making – , – , there has been no prior attempt to use ccPAS to modulate back-projections between critical visual nodes within a network for emotion processing. Indeed, causal evidence supporting the effects of recursive processing in the human visual system was limited to the involvement of the V5/MT-to-V1 pathway in low-level motion perception – . Moreover, prior ccPAS investigations into the role of back-projections have focused on perceptual performance – , leaving it unclear whether behavioral enhancements are mediated by neurophysiological changes in the targeted networks. Here we found that exogenous manipulation of pre- and post-synaptic nodes in the pSTS-to-V1/V2 pathway, in accordance with the temporal parameters of Hebbian plasticity, leads to plastic changes that boost sensitivity to facial expressions for at least 80 min. This effect was consistently observed across three independent experiments (Exp STS-V1 groups in Experiments 2, 3 and 5) and occurred only when the temporal features of the ccPAS protocol matched those estimated to be optimal from the TMS-evoked EEG response in Experiment 1 (i.e., with a 200-ms ISI; Supplementary Fig. ). The enfolding and long-lasting behavioral effect of ccPAS (see Supplementary Fig. ) appears in keeping with prior TMS studies on Hebbian plasticity in humans , – . In fact, visual sensitivity was increased immediately after the stimulation protocol and persisted for at least 80 min, following an inverse U-shaped curve peaking at around 40–60 min (Supplementary Fig. ).
The improved ability to recognize emotions in observed facial expressions was paralleled by an increase in the P1 component amplitude post-ccPAS, reflecting strengthened temporo-occipital activations during early stages of visual processing. This electrophysiological biomarker, though measurable in pSTS, was maximal over V1/V2 where neural activations were expected to converge due to ccPAS targeting of pSTS-to-V1/V2 back-projections. In keeping with this idea, the neuronal generators of the enhanced P1 activity nicely corresponded to the coordinates of the stimulated sites during ccPAS and were not found following stimulation of the same sites in forward direction (V1/V2-to-pSTS). Perceptual enhancement did not take place if the requirements of STDP in the pSTS-to-V1/V2 pathway were not met. In fact, none of the participants in the conditions controlling for timing (Ctrl 0ms and Ctrl 100ms ), directionality (Ctrl V1-STS ) or nonspecific effects (Ctrl Sham ) improved in emotion perception following ccPAS administration. Compelling evidence indicates that perceptual decisions arise as the consequence of recursive loops in the visual system, where initial activity in early areas is “explained away” by backward-flowing information from structures at subsequent processing stages, until the visual representation stabilizes and recognition is achieved , , . Recurrent processing seems particularly important when stimuli are degraded, noisy, or otherwise ambiguous , . As it happens, initial responses to degraded stimuli are weakened in the visual system but may be re-instantiated by recurrent processing – . Our results are in keeping with this notion; in fact, ccPAS improved emotion perception only when faces were briefly presented (17 ms) under noisy (masking) conditions, whereas no significant variation in perceptual sensitivity was observed at longer exposures. One may wonder why no changes in performance or electrophysiological responses were detected following ccPAS in the Ctrl V1-STS groups of Experiments 3 and 5. In principle, reversing the order of the ccPAS pulses (i.e., first TMS pulse over V1/V2, second pulse over pSTS) should strengthen feedforward connections in the network. However, the lack of reliable electrophysiological changes following Ctrl V1-STS ccPAS suggests that this protocol was less effective than Exp STS-V1 ccPAS in modulating cortico-cortical networks. Notably, the ISI of the ccPAS protocol was selected based on TMS-EEG co-registration , aimed at detecting remote effects of pSTS stimulation over early visual cortices, which yielded maximal activity in V1/V2 after ~200 ms. Here we have demonstrated that this approach can guide exogenous manipulation of long-latency reentrant connectivity from pSTS to V1/V2. Future studies could use a similar TMS-EEG approach (i.e., investigating remote effects of V1/V2 stimulation over higher-order visual areas) to develop and test feedforward-specific ccPAS manipulations. The present study also addressed the functional specificity of the long-latency ccPAS procedure. Participants in the Exp2 STS-V1 , Exp3 STS-V1 and Exp5 STS-V1 groups showed a consistent improvement in emotion perception, whereas no effect on gender perception was found under identical conditions in Experiment 4, even though the two tasks were matched for difficulty. This enhanced sensitivity to emotional expressions cannot be attributed to changes in response bias, nonspecific effects, or speed/accuracy trade-offs (Supplementary Table and ). 
Neural populations in sectors of pSTS respond specifically to emotional expressions, as convincingly demonstrated by single-cell recordings in monkeys and humans , . Likewise, pSTS shows greater activation when participants are asked to recognize facial expressions , than when they are asked to recognize faces based on morphological cues. Indeed, visual recognition of facial identity, age, or gender relies more on ventral occipito-temporal face-selective areas – . Accordingly, repetitive TMS studies have provided causal evidence for a dissociation between OFA and pSTS in discriminating faces based on morphological vs. expressive features, respectively – . Our study expands these prior findings by providing evidence of a functionally specific cortico-cortical neural mechanism through which pSTS can contribute to accurate perception of emotional facial expressions via top-down influence over early visual cortex. Evidence from non-human primates shows that the dominant direction of signal flow within occipito-temporal networks is feedforward shortly after stimulus onset (i.e., within 150–200 ms), but then gradually reverses to feedback at longer latencies and remains so at rest , . The long-latency 200-ms ISI of the critical ccPAS protocol is in keeping with these neural dynamics and suggests that its efficacy depends upon the relative delay of backward communication within temporo-occipital networks , . We cannot rule out the possibility that the ccPAS protocol with a 200-ms ISI also indirectly recruited larger networks besides occipito-temporal areas. Indeed, TMS effects can be site- and function-specific, but not necessarily site-limited , . Yet, we did not find support for this possibility, as the EEG results in Experiment 5 located the effects of ccPAS within the stimulated sites of the occipito-temporal network, primarily in V1/V2, as predicted by Hebbian STDP. Rapid feedback interactions coexist with the initial and dominant feedforward signal flow and can influence basic levels of visual processing – . Likewise, early interference with pSTS through rTMS in the 60–140-ms time window impairs ongoing perception of facial expressions . These findings suggest that pSTS exerts its influence on emotion processing at multiple stages. This does not contradict our results. In fact, the early influence of pSTS on perception was reported during a single site, online rTMS interference protocol in an emotional face recognition task , whereas the long latency of our ccPAS protocol referred to the time window of maximal pSTS-to-V1/V2 interactions evidenced by single pulse TMS-EEG co-registration at rest (Experiment 1). The ccPAS protocol itself was administered offline in Experiments 2–5, and its behavioral impact was measured in a subsequent task. Further evidence of multiple temporal windows of pSTS influence also comes from Experiment 1, where we detected faster (although smaller and short-lasting) effects of pSTS stimulation on activity in early visual areas. These findings may inform further ccPAS manipulations based on rapid pSTS-V1/V2 feedback interactions, and future research could leverage both short- and long-latency neural interactions to promote Hebbian plasticity. In conclusion, our study demonstrated that ccPAS aimed at strengthening the synaptic efficacy of long-latency pSTS-to-V1/V2 connections selectively enhances visual sensitivity to facial expressions. 
We provided the first causal evidence that pSTS-to-V1/V2 connections are malleable and afford a neural mechanism functionally relevant to emotion recognition. Furthermore, plastic enhancement critically depended on a time-resolved pairing of pre- and post-synaptic nodes that mimics STDP of temporo-occipital interactions. Our study thus provides proof of principle that long-latency ccPAS can be used to improve visual functions in healthy humans. These findings have theoretical and methodological implications, as they suggest that ccPAS can target complex cortico-cortical pathways while maintaining functional specificity. Moreover, we add to the growing literature showing the potential utility of non-invasive brain stimulation for improving cortical functions in humans , , – .

Participants

A total of 155 healthy young adults were involved in the study. In Experiment 1, 10 participants (6 females and 4 males; mean age ± standard deviation: 22.1 y ± 2.2) were tested using TMS-EEG co-registration. In Experiment 2, 42 participants (22 females and 20 males; 23.9 y ± 2.2) were randomly assigned to one of three ccPAS conditions (Exp2 STS-V1 , Ctrl 0ms , Ctrl 100ms ) testing the temporal specificity of backward connectivity. In Experiment 3, 32 participants (15 females and 17 males; 23.6 y ± 2.8) were randomly assigned to one of three ccPAS conditions (Exp3 STS-V1 , Ctrl V1-STS , Ctrl Sham ) testing for directional specificity and nonspecific effects. In Experiment 4, 28 participants (19 females and 9 males; 22.8 y ± 2.5) were randomly assigned to one of two conditions (Ctrl-Gender STS-V1 , Ctrl-Gender Sham ) testing for task specificity. Finally, in Experiment 5, 36 participants (15 females and 21 males; 22.9 y ± 2.6) were randomly assigned to one of two ccPAS conditions (Exp5 STS-V1 , Ctrl V1-STS ) testing for directional specificity using behavioral and ERP methods. Participants were recruited through a combination of printed and electronic advertisements displayed on notice boards at different University of Bologna sites, as well as through word of mouth. Four additional participants were tested in the initial phases of Experiments 2–4 but excluded because of technical failures, either before or during ccPAS administration. In Experiment 5, baseline behavioral data from one participant in the control group were lost due to a technical failure, so this participant was excluded from analyses of behavioral data, but included for EEG data analyses. No participant was tested in more than one experiment. We chose the sample size of Experiment 1 based on prior TMS-EEG work investigating TMS-evoked responses – . We estimated the sample of the experimental groups in Experiments 2 and 3 based on prior work in our lab investigating the effect of V5-V1 ccPAS on motion perception , and the effect of STS-rTMS on emotion perception , all showing large effect sizes (mean Cohen’s d = 1.10). Using G*Power 3 software with power (1−β) = 0.95 and α = 0.05, we estimated that a sample of 11 participants would be sufficient to show baseline vs. post-ccPAS differences in the experimental groups. We decided to slightly increase this sample to 13/14 participants for each experimental or control group in Experiments 2–4. Moreover, we increased the sample to 18 participants for each group of Experiment 5, testing not only behavioral but also physiological data. The resulting sample sizes of all the experiments were similar to or greater than those of prior STS-rTMS studies on emotion perception – .
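The a-priori sample-size estimate described above can be approximated outside G*Power, for example with statsmodels. The paired/one-sample t-test family and the two-sided alternative used in this sketch are assumptions, so the value returned only approximates the reported n of 11 and depends on the exact test settings chosen.

from statsmodels.stats.power import TTestPower

n_required = TTestPower().solve_power(effect_size=1.10, alpha=0.05, power=0.95,
                                       alternative="two-sided")
print(f"estimated sample size per experimental group: {n_required:.1f}")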
All the participants were right-handed according to a standard handedness inventory , had normal or corrected-to-normal visual acuity in both eyes, and were naive as to the purposes of the experiment. None had neurological, psychiatric, or medical problems or any contraindication to TMS . Participants provided written informed consent. The procedures were approved by the Bioethics Committee at the University of Bologna and were carried out in accordance with the ethical standards of the Declaration of Helsinki. No discomfort or adverse effects of TMS were reported or noticed during the experimental sessions.

Experiment 1: TMS-EEG experiment

We used TMS-EEG co-registration to track the time-course of the pSTS influence over V1/V2 and thus identify a critical ISI for designing the ccPAS protocol we would use in Experiments 2–4. Participants received 60 active and 60 sham TMS pulses at rest over a right pSTS site that was identified using neuronavigation (see below). EEG signals were acquired with a TMS-compatible EEG amplifier (BrainAmp DC, BrainProducts GmbH, Germany) and 60 electrodes (EasyCap GmbH, Germany) mounted on an elastic cap according to the standard 10/5 coordinate system. To monitor eye movements and blinks, three electrodes were placed on the outer canthi of both eyes and beneath the left eye. Reference and ground electrodes were placed on the right mastoid and AFz, respectively. The impedance was kept below 5 kΩ, and the electrode lead wires were arranged properly in order to reduce the TMS-induced electrical artifact . EEG signals were digitized at a sampling rate of 5 kHz and low-pass filtered at 1 kHz (DC-recording). The analysis was performed using EEGLAB 2022.1 running on MATLAB. The fast-rising, fast-falling magnetic artifact and the early TMS-evoked muscle activity were removed by cutting and interpolating (cubic interpolation) the EEG signals in the interval from 1 ms before to 20 ms after TMS. A high-pass filter (Hamming windowed sinc FIR filter, cutoff frequency = 0.01 Hz) was then applied and signals were down-sampled to 1000 Hz. Continuous signals were segmented into a window (−100, 600 ms) around the TMS pulse and baseline-corrected to a time period of 90 ms (–100 to –10 ms) preceding TMS administration. EEG data were preprocessed to remove noisy epochs and correct muscular or eye artifacts with independent components analysis . TMS-evoked responses were analyzed at the sensor and source level to identify activity peaks in V1/V2 following pSTS stimulation. For sensor-level analyses, we averaged the signals from posterior occipital electrodes (O1, Oz, O2). For the source analysis, we estimated current source densities by projecting scalp potentials to source space using standardized low-resolution brain electromagnetic tomography (sLORETA - v20171101) – . TMS responses were projected onto a realistic head model based on the MNI152 template and restricted to cortical gray matter. A region of interest (ROI) approach was applied to measure the time-course of cortical responses in V1/V2. Specifically, a spherical ROI with a 20 mm radius was centered on the V1/V2 stimulation coordinates (V1-ROI) used in Experiments 2–4 (see the Neurostimulation paragraph below), and mean activity was extracted across the voxels contained within the ROI.
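The artifact-handling and epoching steps described above can be sketched as follows. This is an illustration in plain NumPy/SciPy rather than the EEGLAB pipeline actually used; the synthetic data, channel count and pulse times are placeholders, and only the interpolation window (-1 to +20 ms), the epoch window (-100 to 600 ms) and the baseline period (-100 to -10 ms) are taken from the text.

import numpy as np
from scipy.interpolate import interp1d

FS = 1000  # sampling rate in Hz (after downsampling)

def interpolate_tms_artifact(data, pulse_samples, pre_ms=1, post_ms=20):
    """Cut and cubic-interpolate each channel across the TMS artifact window."""
    data = data.copy()
    for p in pulse_samples:
        a, b = p - int(pre_ms * FS / 1000), p + int(post_ms * FS / 1000)
        keep = np.r_[a - 4:a, b + 1:b + 5]           # samples flanking the excised gap
        for ch in range(data.shape[0]):
            f = interp1d(keep, data[ch, keep], kind="cubic")
            data[ch, a:b + 1] = f(np.arange(a, b + 1))
    return data

def epoch_and_baseline(data, pulse_samples, tmin=-0.1, tmax=0.6, bmin=-0.1, bmax=-0.01):
    """Segment around each pulse and subtract the mean of the pre-pulse baseline."""
    i0, i1 = int(round(tmin * FS)), int(round(tmax * FS))
    epochs = np.stack([data[:, p + i0:p + i1] for p in pulse_samples])
    t = np.arange(i0, i1) / FS
    baseline = epochs[:, :, (t >= bmin) & (t <= bmax)].mean(axis=2, keepdims=True)
    return epochs - baseline, t

# Toy continuous recording: 60 channels, 100 s, pulses every 5 s starting at 10 s
rng = np.random.default_rng(2)
raw = rng.normal(0, 1e-6, (60, 100 * FS))
pulses = np.arange(10 * FS, 95 * FS, 5 * FS)
clean = interpolate_tms_artifact(raw, pulses)
epochs, t = epoch_and_baseline(clean, pulses)
print(epochs.shape)  # (n_pulses, n_channels, n_times)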
In order to rule out possible contamination due to the spread of local activation in the TMS target area, mean activity after both active and sham TMS was extracted from a spherical ROI (20 mm radius) centered on pSTS coordinates (STS-ROI, see supplementary material). Experiments 2–5: general design The experiments were programmed using MATLAB 2011b software to trigger TMS pulses, control stimulus presentation, and acquire behavioral responses. In each experiment, participants were randomly assigned to different groups according to the ccPAS protocol they would undergo. To test the effect of ccPAS on behavior, participants performed an emotion perception task (Experiments 2, 3 and 5) or a gender perception task (Experiment 4) before undergoing their assigned ccPAS protocol (i.e., at baseline), immediately after ccPAS administration (T0), and 20 (T20), 40 (T40), 60 (T60) and 80 (T80) minutes after ccPAS. In Experiment 5, participants performed the emotion perception task at baseline and T0 while we simultaneously recorded EEG activity. For Experiments 2–5, we implemented a double-blind procedure: participants were blinded to group allocation, and the experimenters who collected and analyzed the data were blinded to the ccPAS conditions. The experimenters who administered ccPAS were not blinded to group allocation because they had to set TMS parameters (i.e., order of pulses, ISI, and orientation of the coils). Experiments 2, 3 and 4: ccPAS and behavior Pictures of faces displaying expressions associated with emotions were presented on a 19-inch screen located about 70 cm away from the participant. Sixteen happy and 16 fearful expressions from 16 models (8 females and 8 males) were selected from the NimStim database and adapted using Adobe Photoshop. Mirror-reflected copies of the faces were also created, so that the total number of stimuli was 64. Each face was cropped using an elliptical stencil to exclude hair, ears and neck so that we could rule out any effects of other physical components besides the facial expression (Fig. ). Using a custom-made MATLAB script, we created mosaic pattern pictures made up of scrambled fragments of each face; we employed these stimuli as visual masks, each preserving the elliptical form, the color and the spatial frequency of the original picture. In Experiments 2 and 3, participants performed a 2-alternative forced choice (2AFC) emotion perception task. On each trial, they were presented with a face and asked to discriminate the target’s perceived emotional expression (forced choice: “happy” or “fearful”). In Experiment 4, participants were exposed to the same pictures but asked to perform a 2AFC gender perception task, requiring them to report the target’s perceived gender (forced choice: “female” or “male”). The tasks were performed in blocks of 192 trials, including 3 sandwich-masked repetitions of the 64 face stimuli using 3 different exposure times. Each trial started with a gray screen (600 ms duration), followed by a forward masking stimulus (17 ms duration) that preceded the target face presented at the center of the screen (Fig. ). Faces were presented for 17, 33 or 50 ms, and then immediately replaced by a backward masking stimulus, which remained on the screen for 50, 33 or 17 ms, respectively, to keep a constant stimulus duration of ~83 ms. A black screen was presented until the participants responded. Participants provided their response by pressing one of two different keys on a keyboard with the index or middle finger of their right hand. They were asked to be as fast and as accurate as possible. Response–button correspondence was randomized across participants. Each block lasted approximately 5 min.
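The scrambled-mosaic masks used in these blocks were generated with a custom MATLAB script that is not reproduced in the text; the sketch below shows one plausible way to build such a mask by tiling a face image, shuffling the tiles, and re-applying an elliptical stencil. The tile size, file names, and stencil geometry are illustrative assumptions, not the original parameters.

```matlab
% Illustrative sketch of building a scrambled "mosaic" mask from a face
% stimulus. Tile size, file names and the elliptical stencil are
% assumptions; the original custom script is not described in detail.
img  = double(imread('face01.png')) / 255;    % hypothetical stimulus file
tile = 16;                                    % tile size in pixels (assumption)
[h, w, nChan] = size(img);
h = floor(h/tile)*tile;  w = floor(w/tile)*tile;
img = img(1:h, 1:w, :);

% Cut the image into tiles and shuffle their positions
ny = h/tile;  nx = w/tile;
order = randperm(nx*ny);
mask  = zeros(h, w, nChan);
for k = 1:nx*ny
    [rDst, cDst] = ind2sub([ny nx], k);           % destination tile
    [rSrc, cSrc] = ind2sub([ny nx], order(k));    % source tile
    mask((rDst-1)*tile+(1:tile), (cDst-1)*tile+(1:tile), :) = ...
         img((rSrc-1)*tile+(1:tile), (cSrc-1)*tile+(1:tile), :);
end

% Re-apply an elliptical stencil so the mask keeps the face outline
[X, Y]  = meshgrid(1:w, 1:h);
stencil = ((X - w/2)/(w/2)).^2 + ((Y - h/2)/(h/2)).^2 <= 1;
mask    = mask .* repmat(double(stencil), [1 1 nChan]);
imwrite(mask, 'mask01.png');
```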
Experiment 5: ccPAS, behavior and ERPs Participants performed a 2AFC emotion perception task as in Experiments 2–4 while EEG was simultaneously acquired. Visual stimuli consisted of pictures of faces displaying expressions associated with emotions presented on a 15-inch screen located about 60 cm from the participant. We used 10 happy and 10 fearful expressions from 10 models (5 females and 5 males) from the pool used in Experiments 2–4. Mirror-reflected copies of the faces were also created, so that the total number of stimuli was 40 and each face was presented three times, for a total of 120 trials. As shown in Fig. , each trial started with a white screen (800 ms duration), followed by the target face (17 ms duration) presented at the center of the screen and immediately replaced by a backward masking stimulus, which remained on the screen for 33 ms, to keep a constant stimulus duration of ~50 ms. A gray screen was presented until the participant’s response. EEG was acquired with the same EEG system and software as in Experiment 1. The signal was down-sampled to 500 Hz, low-pass filtered (cut-off frequency = 40 Hz, FIR filter), and re-referenced to the linked mastoids. Continuous signals were epoched in a window (−200, 600 ms) around the stimulus. Stereotyped artifacts such as eye blinks were corrected using independent component analysis. Bad epochs (those containing large rubbing artifacts or other non-stereotyped noise) were removed by visual inspection. ERP components (P1, N170, P2) were calculated separately for each channel and condition by selecting a 40-ms time window for the P1 (100–140 ms), N170 (150–190 ms) and P2 (200–240 ms) components, and computing peak amplitudes. For sensor-level analyses, we averaged the signals from occipito-parietal clusters of electrodes in the right (PO8, P8, P6) and left (PO7, P7, P5) hemispheres, after visual inspection of each component for each electrode without considering the condition to avoid circularity. ccPAS protocols The ccPAS protocols were delivered with a Magstim BiStim2 machine (Magstim Company, UK) via two 50 mm figure-of-eight coils placed over the right pSTS and V1/V2. Ninety pairs of stimuli were continuously delivered at a rate of 0.1 Hz for ~15 min, with each pair of stimuli consisting of two monophasic transcranial magnetic pulses. The pulses were triggered remotely using a computer that controlled both stimulators. TMS intensity was set to 60% of the maximum stimulator output. The ccPAS protocol was manipulated in different groups of participants in Experiments 2–5. Experiment 2: Testing time-specific activation of backward connections Experimental condition: Exp2 STS-V1 In each TMS pair, the first pulse was delivered to pSTS and followed by a second pulse delivered to V1/V2 with an ISI of 200 ms, in accordance with Experiment 1. This timing was critical to induce convergent activation of V1/V2 neurons via stimulation of pSTS and V1/V2 and thus induce STDP in pSTS-to-V1/V2 pathways. The protocol was designed to strengthen reentrant connections from pSTS to V1/V2, thus enhancing the area of convergent activation, i.e., V1/V2 (Supplementary Fig. ). Simultaneous active control for timing: Ctrl 0ms In this condition, both pulses were delivered simultaneously (ISI = 0 ms). According to the Hebbian principle, a synapse increases its efficacy if the presynaptic neuron persistently takes part in firing the post-synaptic target neuron. However, if two neurons fire at the same time, then one cannot have caused or taken part in firing the other. Thus, although I-wave interactions may occur during simultaneous TMS pulses, no net STDP is expected following Ctrl 0ms.
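The timing structure shared by all the ccPAS conditions — 90 pulse pairs delivered every 10 s, with the coil order and ISI defining the condition (including the reversed-direction control introduced below) — can be summarized in a short control-loop sketch. The trigger call below is a hypothetical placeholder: the text does not describe the software interface that drove the two stimulators, so the sketch illustrates only the pairing logic, not the actual control code.

```matlab
% Illustrative sketch of the ccPAS pairing structure only (90 pairs at
% 0.1 Hz; pulse order and ISI depend on the condition). triggerPulse is a
% hypothetical placeholder; the real stimulator-control code and API used
% in the experiments are not described in the text.
condition    = 'Exp_STS_V1';   % 'Exp_STS_V1' | 'Ctrl_V1_STS' | 'Ctrl_0ms' | 'Ctrl_100ms'
nPairs       = 90;
pairInterval = 10;             % seconds between pair onsets (0.1 Hz)

switch condition
    case 'Exp_STS_V1',  order = {'pSTS', 'V1V2'}; isi = 0.200;  % pSTS then V1/V2
    case 'Ctrl_V1_STS', order = {'V1V2', 'pSTS'}; isi = 0.200;  % reversed direction
    case 'Ctrl_0ms',    order = {'pSTS', 'V1V2'}; isi = 0;      % simultaneous (in practice a single shared trigger)
    case 'Ctrl_100ms',  order = {'pSTS', 'V1V2'}; isi = 0.100;  % non-optimal ISI
end

for k = 1:nPairs
    tPair = tic;
    triggerPulse(order{1});                      % hypothetical trigger call
    if isi > 0
        pause(isi);                              % wait for the programmed ISI
    end
    triggerPulse(order{2});
    pause(max(0, pairInterval - toc(tPair)));    % hold ~10 s between pair onsets
end
```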
Asynchronous active control for timing: Ctrl 100ms Stimulation was identical to that of the experimental condition except that pulses were delivered at a non-optimal ISI of 100 ms. Based on Experiment 1, the cortico-cortical volley elicited by pSTS stimulation (first pulse) would not consistently activate V1/V2 neurons at the time of exogenous V1/V2 stimulation (second pulse), thus failing to produce the convergent V1/V2 activation which is crucial for inducing STDP. This ccPAS condition controlled for timing-dependent effects. That is, it allowed us to verify that effects found in the Exp2 STS-V1 condition were timing-dependent and not provoked by any consistent stimulation pairing the targeted areas. Experiment 3: Testing direction-specific activation of the pSTS-V1/V2 network Experimental condition: Exp3 STS-V1 This pSTS-to-V1/V2 ccPAS group was identical to the Exp2 STS-V1 condition and aimed at replicating the effect observed in Experiment 2. Active control for direction: Ctrl V1-STS In this condition, we switched the direction of the associative pulses: the first pulse was given to V1/V2 and the second pulse to pSTS at the same ISI as the experimental condition (i.e., 200 ms). The Ctrl V1-STS group controlled for direction-dependent effects. That is, it allowed us to verify that any effect found in the experimental condition was the result of strengthened feedback connections (from pSTS to V1/V2) and not found when reversing the order of the pulses, potentially activating feedforward connections. Sham control for nonspecific effects: Ctrl Sham Stimulation in this sham condition was identical to that of the experimental condition, except that the TMS coils were tilted at 90 degrees, so that no current was induced in the brain throughout the ccPAS session. Experiment 4: Testing task specificity Active stimulation for control task: Ctrl-Gender STS-V1 The ccPAS protocol was identical to the experimental conditions of Experiments 2 and 3, but participants performed a gender perception task instead of the emotion perception task. Sham stimulation for control task: Ctrl-Gender Sham Stimulation in this condition was identical to the Ctrl Sham condition of Experiment 3. Participants performed a gender perception task. Experiment 5: Neurophysiological correlates of improved emotional expression perception Experimental condition: Exp5 STS-V1 This pSTS-to-V1/V2 ccPAS protocol was identical to the Exp2 STS-V1 and Exp3 STS-V1 conditions and aimed at replicating the effects observed in Experiments 2 and 3. Active control for direction: Ctrl V1-STS This V1/V2-to-pSTS ccPAS group was identical to the Exp3 Ctrl V1-STS condition and aimed at controlling for direction-dependent effects. In both the experimental and control conditions, the behavioral task was adapted to EEG acquisition. Neuronavigation In all experiments, the pSTS and V1/V2 sites were individually targeted using image-guided neuronavigation. The positions of the two coils were identified on each participant’s scalp using the SofTaxic Navigator System (Electro Medical Systems) as in prior research.
Skull landmarks (nasion, inion, and 2 preauricular points) and ~100 points providing a uniform representation of the scalp were digitized by means of a Polaris Vicra digitizer (Northern Digital). An individual estimated magnetic resonance image (MRI) was obtained for each subject through a 3D warping procedure, fitting a high-resolution MRI template with the participant’s scalp model and craniometric points. This procedure has been shown to ensure a global localization accuracy of roughly 5 mm. Stimulation sites were identified in Talairach space on the basis of previous fMRI and TMS studies. When necessary, MNI coordinates were converted into Talairach space using GingerALE v. 2.3.1. The pSTS was localized in the right hemisphere at the coordinates x = 53, y = −49, z = 10, estimated by averaging subject-weighted coordinates identified in a recent meta-analysis during emotion evaluation (75 experiments, 1742 participants) and passive observation of emotional facial expressions (20 experiments, 411 participants). The pSTS site is well in keeping with other brain imaging meta-analyses on emotional face perception and prior TMS studies, falling within the range of interindividual variability of the face-selective area in the pSTS reported by Sliwinska and Pitcher. To localize V1/V2, we identified the scalp location that corresponded best to early visual cortex (x = 19, y = −98, z = 1). The pSTS and V1/V2 scalp locations identified by neuronavigation were marked with a pen on each participant’s head and used to place the coils. Then, SofTaxic automatically estimated the individual Talairach coordinates corresponding to the projection of the targeted scalp sites onto the surface of the MRI-constructed stereotaxic template. These estimated coordinates indicated the most superficial cortical site where TMS effects were expected to be maximal. The mean coordinates (± standard deviation) of the targeted pSTS cortical site corresponded to the most posterior sector of Brodmann area 21 (Experiment 1: x = 56 ± 2, y = −50 ± 2, z = 9 ± 2; Experiment 2: x = 58 ± 3, y = −50 ± 2, z = 9 ± 2; Experiment 3: x = 59 ± 3, y = −49 ± 2, z = 9 ± 1; Experiment 4: x = 57 ± 2, y = −50 ± 2, z = 9 ± 1; Experiment 5: x = 57 ± 3, y = −49 ± 2, z = 9 ± 2). The mean coordinates of the targeted V1/V2 cortical site corresponded to Brodmann area 17 in the middle occipital gyrus (Experiment 2: x = 19 ± 1, y = −96 ± 1, z = 1 ± 1; Experiment 3: x = 19 ± 1, y = −96 ± 1, z = 1 ± 2; Experiment 4: x = 18 ± 1, y = −97 ± 1, z = 0 ± 1; Experiment 5: x = 18 ± 1, y = −97 ± 1, z = 0 ± 1). In Experiment 1, ROIs were centered over the targeted coordinates (pSTS-ROI: x = 53, y = −49, z = 10; V1/V2-ROI: x = 19, y = −98, z = 1). Figures , , and Supplementary Fig. display schematic representations of the stimulated sites on a standard MRI template (Colin-27) from MRIcron. Data analysis Behavioral data were processed offline. Response times (RTs) were calculated after removing trials with an incorrect (~13%) or slow (≥ 1 s) response (~4%). Accuracy was converted into measures of sensitivity (d’) and response bias (β) in accordance with signal detection theory. In the emotion (or gender) perception task, two types of responses were scored as correct: a “fearful” (“male”) response to a fearful expression (male face) counted as a hit and a “happy” (“female”) response to a happy expression (female face) counted as a correct rejection.
Two types of responses were scored as incorrect: a “fearful” (“male”) response to a happy expression (female face) counted as a false alarm and a “happy” (“female”) response to a fearful expression (male face) counted as a miss. To compare the effects of ccPAS across Experiments 2–4 and normalize the data distributions, changes in performance were baseline-corrected. d’ and RT values at each post-ccPAS time point (T0, T20, T40, T60, T80) and for each exposure time (17, 33, 50 ms) were divided by the corresponding baseline values, whereas post-ccPAS response bias (β) values were baseline-corrected by subtracting baseline values. The same normalization was computed in Experiment 5 for post-ccPAS (T0) values relative to baseline values of d’, RTs and β. In Experiments 2–4, mixed factors ANOVAs were performed on baseline-corrected d’, β and RT values with ccPAS as a between-subjects factor and Exposure time (17, 33, 50 ms) and Time from ccPAS (T0, T20, T40, T60, T80) as within-subjects factors. Behavioral data in Experiment 5 were analyzed using a 1-way ANOVA with the between-subjects factor ccPAS. Electrophysiological data were analyzed offline. Mixed factors ANOVAs were performed on baseline-corrected ERP peak amplitudes (T0 minus baseline) with ccPAS (Exp5 STS-V1, Ctrl V1-STS) as a between-subjects factor and Electrode cluster (left, right) as a within-subjects factor. Post-hoc analysis was performed using the Duncan test to correct for multiple comparisons. The Greenhouse–Geisser correction was employed where appropriate. In all the analyses, partial eta squared (ηp²) was computed as a measure of effect size for the main effects and interactions, whereas Cohen’s d was computed for t-tests and post-hoc comparisons. All statistical tests were two-tailed and conducted using Statistica v. 12 (StatSoft, Inc., Tulsa). For source analysis, we estimated current source densities by projecting scalp potentials into source space using the sLORETA method, as implemented in Brainstorm software, and the ICBM 152 MRI template, as in Experiment 1. To investigate the effect of ccPAS at the source level, we obtained within-group source activation differences (T0 minus baseline) separately for each participant. For the statistical analysis, following the recommended procedures for unconstrained source analysis, we compared the power for each source and computed between-groups power tests (F-tests). We focused on the three time windows already analyzed at the sensor level for the P1, N170 and P2 components and performed three separate statistical tests, one for each time window. The source activation values at all timepoints within each time window were averaged before the analysis. To correct for multiple comparisons, we used the False Discovery Rate method (FDR). Reporting summary Further information on research design is available in the Reporting Summary linked to this article.
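Given the response coding and baseline normalization just described, sensitivity and response bias follow directly from the standard signal detection formulas. The sketch below shows the computation for one cell of the design; the trial and response counts are illustrative, and the correction applied to extreme hit or false-alarm proportions is an assumption, since the text does not specify how rates of 0 or 1 were handled.

```matlab
% Minimal sketch of computing sensitivity (d') and response bias (beta)
% from the response coding described above (Statistics Toolbox norminv).
% Trial counts, response counts and the correction for extreme
% proportions are illustrative assumptions.
nSignal = 32;     % fearful ("signal") trials in one cell of the design
nNoise  = 32;     % happy ("noise") trials
nHits   = 27;     % "fearful" responses to fearful faces (example values)
nFA     = 6;      % "fearful" responses to happy faces

pHit = min(max(nHits/nSignal, 1/(2*nSignal)), 1 - 1/(2*nSignal));  % avoid 0 and 1
pFA  = min(max(nFA/nNoise,    1/(2*nNoise)),  1 - 1/(2*nNoise));

zHit = norminv(pHit);
zFA  = norminv(pFA);

dprime   = zHit - zFA;                     % sensitivity
biasBeta = exp((zFA^2 - zHit^2) / 2);      % likelihood-ratio response bias
fprintf('d'' = %.2f, beta = %.2f\n', dprime, biasBeta);
```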
Supporting Care Transformation Through a Comprehensive Graduate Medical Education Curricular Program in a Department of Internal Medicine
There is a growing mandate to include health systems science (HSS) principles and skills within Graduate Medical Education (GME) to equip physicians to address the evolving needs of effective healthcare delivery. As health systems transform, physicians need to be engaged in the work of driving improvements that ensure better care outcomes and enhance health equity. These skills are particularly important in Departments of Internal Medicine (IM), which have large numbers of trainees and care for high proportions of the most vulnerable patients. Although the American Medical Association’s Accelerating Change in Medical Education consortium has helped introduce HSS curricula into medical schools, few GME programs have been organized around the unifying framework of HSS and instead have focused more narrowly on quality improvement (QI) and patient safety. One potential reason for this gap is that the Accreditation Council for GME core competency of systems-based practice does not include a comprehensive conceptual framework, leaving notable gaps in several HSS domains. The Department of IM at Dell Medical School at The University of Texas at Austin was established in 2016 with a key goal of advancing care transformation and improving health equity in its community. As a component of that work, Department leadership sought to establish a comprehensive HSS program that included active participation in a Dell Med School-wide educational program for all GME trainees focused on care transformation. The Dell Med Department of IM program was unique because it incorporated multiple reinforcing efforts that collectively integrated training and experiential projects across all HSS domains. This report provides a description of the Dell Med Department of IM program along with outcomes and lessons learned over 7 years of implementation. In 2016, with the establishment of the new Dell Medical School, we began planning a care transformation program within the new Department of IM. The first resident physicians were accepted to the program in 2017. Primary training sites include a safety-net hospital, a Federally Qualified Health Center (FQHC) clinic and a Veterans Affairs (VA) clinic. Between 2020 and 2023, 33 residents and fellows from the Department of IM completed training with participation in the Care Transformation program. Prior to 2016, Austin’s community-based IM residency program offered limited exposure to HSS. The Department of IM developed four major reinforcing components for its care transformation educational approach: (1) participation in the Dell Med School-wide Distinction in Care Transformation Program for select residents and fellows; (2) a new VA-based Primary Care (PC) Residency track that participates in the Distinction in Care Transformation Program; (3) participation in the national VA Chief Resident in Quality and Safety (CRQS) Program; and (4) a new Hospital Medicine Fellowship in Quality and Safety. Dell Med School-Wide Distinction in Care Transformation Program The Distinction in Care Transformation program at Dell Med was created in 2017 as a multidisciplinary, longitudinal GME program that includes foundational coursework and experiential learning through a mentored healthcare transformation project, designed to develop physician leaders who are prepared to lead change in an evolving healthcare system. The IM track was introduced as a categorical program in the National Resident Matching Program, created by re-purposing existing categorical slots.
The first cohort of IM residents participated alongside resident colleagues in pediatrics, women’s health, and general surgery. The program grew from three to five IM residents per year, and now also includes participation from the PC track residents (described below), fellows in hospital medicine and palliative medicine, and resident trainees from multiple other specialties across Dell Medical School. Health Systems Science Curriculum The didactic curriculum is overseen by a multidisciplinary team of medical educators, including Associate Chairs from several departments. Its design addresses each component from the framework of HSS, including education in high-value care, QI, interprofessional communication skills, healthcare policy and equity, leadership, implementation science, and systems thinking. In addition, the curriculum includes an emphasis on design thinking taught by human-centered design experts. The formal components of the Distinction in Care Transformation program begin in the second year of residency with two week-long sessions of “Foundations Coursework” that includes the full cohort of distinction program residents and fellows across the medical school. Delivery formats include a blend of case-based learning, interactive workshops, lectures, and experiential learning delivered synchronously and asynchronously. Learning sessions are grouped by theme. For example, there are sessions on interprofessional communication skills, a workshop using the “Crucial Conversations” framework, and a TeamSTEPPS program. High-value care and QI are taught in interactive sessions by faculty experts, and are supplemented by a subset of online Dell Med “Discovering Value-Based Health Care Learning Modules,” which are freely available, interactive resources that teach principles and approaches to create value in healthcare. Health system leaders meet with residents to discuss healthcare policy and financing. Participants are also assigned to shadow hospital and clinic leaders to gain a first-hand, one-on-one perspective of healthcare leadership, pitch project ideas, and generate high-level support for their work. Care Transformation Projects and Mentorship In addition to didactics, a key component of the program is mentored experience in applying care transformation. Participants in the Distinction Program in Care Transformation, in consultation with program leaders and coaches, may choose either to design a new mentored project or to lead a component of an ongoing program. IM projects span the inpatient and outpatient settings, as well as the community. Trainees are matched with project mentors with diverse interests within the Department of IM. Due to institutional priorities, local faculty expertise, and relative ease of implementation, QI projects have been emphasized. The program also provides data support, QI coaching, and some administrative support for preparing scholarly output. Residents may commit 1 month of dedicated time during each of the second and third years of residency to further advance their projects, as well as use time during other rotations, electives, and ambulatory blocks (for example, a dedicated QI half-day each ambulatory week). While each resident has a primary faculty mentor with specific content expertise, the resident cohorts are also coached by two Distinction Program leaders from within the Department, who ensure continued engagement.
Each quarter, the Distinction Program leadership hosts a check-in dinner to build community among IM participants, discuss their projects, provide coaching, and review HSS-related literature. Participants are also required to submit progress reports twice at a regular cadence, which promotes accountability and triggers mentorship engagement and support in moving projects forward and ensuring that key program goals are met. The Distinction in Care Transformation Program concludes in May of the final year with presentations at Dell Med’s GME Research Day. A New VA-Based Primary Care Residency Track In 2018, the Department of IM, in partnership with Central Texas VA leadership, created a separate PC residency track based at the Austin VA with four residents per class. Participation in the Distinction in Care Transformation program, including both the coursework and a care transformation project, is built into this residency track. PC track residents are encouraged to develop projects based in the outpatient setting, after identifying quality gaps or areas in need of improvement in their VA clinic practice. VA clinical faculty and the CRQS provide hands-on mentorship of these projects. Department of IM leadership instituted monthly Ambulatory QI Faculty Mentor meetings, which introduced a mechanism for ensuring continued progress and faculty development to drive projects. Participation in the National VA Chief Resident in Quality and Safety Program In 2017, the Department of IM and the Central Texas VA, which is one of two ambulatory teaching sites for the Dell Med IM residency, applied for and received an ambulatory-focused CRQS position. The CRQS participates in the national VA curriculum for QI training, as well as in monthly Ambulatory QI Leadership meetings where Dell Med faculty mentors and coaches provide feedback and instruction about ongoing resident-led QI projects. The CRQS also helps lead the outpatient-based QI curricula for all residents. The CRQS serves as the primary lead for a QI project and provides direct guidance for resident-led ambulatory QI projects at the Austin VA IM Clinic. The CRQS works closely with the CRQS faculty mentor and the PC Track Program Director to assimilate into their role as a departmental leader for quality and safety. A Hospital Medicine Fellowship in Quality and Safety In 2019, we introduced a Hospital Medicine Fellowship in Quality and Safety as an integrated 1-year program designed to provide post-graduates with clinical, academic, and administrative training in hospital medicine, with an emphasis on QI, patient safety, and health equity in our safety-net teaching hospital. Fellows dedicate approximately 50% of their time to inpatient clinical work as an attending physician and 50% to experiential work leading care transformation and patient safety projects, participating in relevant hospital committees and programs, and attending faculty development sessions. As part of the fellowship, trainees also participate in the Distinction in Care Transformation Program. Palliative Care fellows were also integrated into the Distinction in Care Transformation Program in 2020.
To assess program impact, we surveyed all IM graduates of the Distinction in Care Transformation Program, the CRQS program, the PC Residency Track, and the Hospital Medicine Fellowship in Quality and Safety in September 2023.
Between 2020 and 2023, 14 IM resident physicians graduated from the Distinction in Care Transformation Program; eight residents graduated from the PC Track; three Hospital Medicine Fellows in Quality and Safety and five Palliative Care Fellows graduated; and three CRQS’s graduated. Each of these participants led or co-led a care transformation project, across a variety of topic areas and settings (Table ). In addition to leading projects as outlined in Table , the second cohort (2018–2021) of residents also stepped up in 2020 to lead and meaningfully contribute to efforts to screen for and address health-related social needs for patients with COVID-19, improve language-concordant communication for patients admitted with COVID-19, and provide additional direct care for patients in the ICU. These efforts were contributors to our high-level performance in limiting mortality from COVID-19 in 2020–2021. Thus far, all participants presented their projects locally at Dell Med GME Research Day, at least 16 have presented at national meetings, and six have published peer-reviewed articles related to their project. Sixteen participants have been selected as Chief Residents, and 18 were accepted to fellowships. Many graduates commented that they felt the skills and experiences they gained in the Distinction Program in Care Transformation impacted their career paths and increased their competitiveness for fellowship and/or faculty positions (Table ). Twenty-two of 33 graduates (67%) completed the survey. Nearly all graduates (21 of 22 respondents, 95%) reported they used skills they learned through the Distinction in Care Transformation program and/or CRQS experience. Descriptions of how they have applied these skills are provided in Table . The Care Transformation program provides real-world experience, a focus on health equity, and a HSS skillset that is increasingly valuable in modern healthcare delivery. In addition, residents produce scholarly output with national presentations and publications related to their work. They apply their new skills in future positions as chief residents, clinical fellows, faculty, and practicing physicians. While the program represents significant investment in faculty time, support, and infrastructure, components of this approach have already been adopted elsewhere, such as a similar “Health Systems Science Distinction Track” that has subsequently been introduced at UT Health San Antonio/Long School of Medicine. One key for success of the program was leveraging existing, under-deployed resources, such as using national VA programs to support the CRQS and primary care residency, as well as financing the hospital medicine fellow through clinical funds. For GME programs that cannot make as big of an investment, components can be integrated into already existing infrastructure for QI projects, resident research, and education. For example, GME-wide Grand Rounds or similar venues can be used to deliver foundational HSS content to trainees. Lessons learned from our experience underline the importance of creating multiple reinforcing components (e.g., the school-wide educational program, a CRQS, and a hospital medicine fellowship program) and setting care transformation as an organizational priority for the Department, with active involvement from the Chair and Associate Chair. 
Another key learning was the importance of a dedicated program leader who served as a coach through regular informal and formal check-in meetings, ensuring continued support and professional development in addition to a traditional project mentor. The Care Transformation programs across the Department of IM at Dell Medical School have set a foundation for creating real-world application of HSS within GME education and provide lessons for other programs facing the imperative to include this emerging competency within clinical training.
The effect of dexmedetomidine-ketamine combination versus dexmedetomidine on behavior of uncooperative pediatric dental patients: a randomized controlled clinical trial
b7a50b6b-ba22-4ce0-bedc-1669d7fcb73b
11464076
Dentistry[mh]
Uncooperative behavior is one of the most prevalent manifestations of dental anxiety in children and may lead to delaying or deferring treatment or to a decrease in the quality of dental care. Managing behavior of uncooperative children can be challenging and, in certain situations, unattainable by using basic behavior management techniques. The use of moderate sedation induces a more positive behavior and allows for the necessary provision of care in a compassionate manner. Pharmacological agents used must reduce risks and prevent complete loss of consciousness. Safe practice requires matching drug selection to type of the procedure and minimizing number of drugs selected. In pediatric dentistry, several sedative drugs have been used via various routes for moderate sedation. Each administration path shows its own benefits and drawbacks. Dexmedetomidine is an alpha-2 agonist and considered a promising sedative agent for pediatric patients. It was initially approved by the U.S. Food and Drug administration (FDA) in 1999 for premedication and sedation of patients in intensive care units. It shows sedative and analgesic properties that control pain, stress, and anxiety. In pediatric patients, it results in stable respiratory rate and predictable cardiovascular reactions. However, dexmedetomidine presents some disadvantages such as slow onset, induced bradycardia, and hypotension. Ketamine is a phencyclidine derivative, used in children due to its analgesic, amnesic, and hypnotic effect. It is highly effective and shows a great safety profile, preserving airway reflexes, and spontaneous ventilation. Its disadvantages include side effects of irritability, induced tachycardia, and hypertension. It is advisable to administer ketamine in conjunction with other agents to reduce its adverse effects. The drawbacks of dexmedetomidine and ketamine can both be counterbalanced when used in combination. Dexmedetomidine and ketamine combination can be administered intramuscularly, intravenously, or transmucosally. The transmucosal drug delivery system includes rectal, intranasal, buccal, or sublingual routes. Buccal route of administration, using an atomizer to deliver the drug on the buccal pouch, offers a rapid onset and bypasses the first-pass metabolism compared to oral sedation. It is considered as a painless, non-invasive procedure, being favored by children. Children may become apprehensive during dental treatment due to various stimuli, which frequently triggers the onset of psychological or physical stress. Although studies have attempted to assess stress during dental treatment using various questionnaires or physiological indices, such as blood pressure and heart rate, it is difficult to objectively measure the latent stress of dental treatment in children. Saliva has recently drawn attention as a sample for stress-related substances measurement since its collection is less invasive, safer, and easier than blood sampling. Saliva contains catecholamines, cortisol, salivary amylase, chromogranin A (CgA), and secretory immunoglobulin A (s-IgA), among other chemicals related to stress. Acute stressors can increase salivary s-IgA within 5–6 minutes after their initiation, whereas cortisol, the most widely used stress biomarker, peaks around 20–30 mins after exposure. In addition, s-IgA shows an added advantage over cortisol since it exhibits a quick decrease during recovery at approximately 30 minutes. 
To the best of our knowledge, no study has investigated salivary s-IgA as a stress biomarker in sedated children undergoing dental treatment. Recent literature supports that dexmedetomidine and ketamine work synergistically, which could lead to a reduction in the dosage of both sedatives. However, there is insufficient evidence on the effect of the combination of dexmedetomidine and ketamine on the behavior of children undergoing dental treatment. Therefore, this study aims to compare the effect of sedation by dexmedetomidine-ketamine combination (DEX-KET) versus dexmedetomidine (DEX) on the behavior of uncooperative pediatric dental patients using the buccal route of administration. Study design This study was a two-arm randomized clinical trial with a parallel design. It was designed and reported following the Consolidated Standards of Reporting Trials (CONSORT) guidelines. The Dental Research Ethics Committee of the Faculty of Dentistry granted the study ethical approval (IRB NO: 00010556 – IORG 0008839), and registered it in the Pan African Clinical Trials Registry database (pactr.samrc.ac.za PACTR202105602764595) before the trial or patient enrollment. Patient recruitment and data collection were conducted from August 2022 to March 2023. Guardians of all children were asked to sign an informed consent form after being given a detailed explanation about potential risks and benefits involved in the study. Sample size estimation Sample size was estimated assuming 5% alpha error and 80% study power. The percentage of ease of treatment completion was 93.8% for dexmedetomidine-ketamine and 61.54% for dexmedetomidine. Based on the difference between independent proportions, a sample size of 28 children per group was obtained, with a total sample of 56 children. The sample size was estimated using G*Power software (version 3.1.97). Study sample Study subjects included 56 healthy children aged 3–5 years who attended the outpatient clinic at the Pediatric Dentistry and Dental Public Health Department, Faculty of Dentistry, Alexandria University, for whom basic behavior management techniques had been unsuccessful in delivering the necessary dental care. Patients included were healthy children free from any systemic disease, categorized in the American Society of Anesthesiologists (ASA) as Class I or II. Participating children exhibited definitely negative and negative behavior (Frankl score 1 or 2) and needed dental treatment under local anesthesia that could be completed in less than 30 minutes. Exclusion criteria included known hypersensitivity or allergy to any of the test drugs, medically or cognitively compromised patients, and children who needed extensive dental treatment requiring general anesthesia. Children were recruited after presenting the study protocol to their parents and obtaining informed consent. Grouping, randomization technique, and allocation concealment Enrolled children were randomly assigned to one of the two groups using a computer-generated list of random numbers: Group I (n = 28) patients received buccal aerosolized dexmedetomidine and ketamine combination (DEX-KET). In total, 2 μg/kg DEX and 2 mg/kg KET. Group II (n = 28) patients received buccal aerosolized dexmedetomidine alone (DEX). In total, 4 μg/kg DEX. Allocation was performed using a permuted block technique, with an equal allocation ratio, using random allocation software.
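As a rough cross-check of the sample-size estimation described above, the standard normal-approximation formula for comparing two independent proportions can be sketched as follows. This is an illustration only: the exact figure of 28 per group depends on the specific procedure and corrections selected in G*Power, so this approximation may yield a slightly different number.

```python
from math import ceil, sqrt
from scipy.stats import norm

def n_per_group(p1, p2, alpha=0.05, power=0.80):
    """Approximate per-group sample size for a two-sided z-test comparing
    two independent proportions (no continuity correction)."""
    z_alpha = norm.ppf(1 - alpha / 2)   # 1.96 for alpha = 0.05
    z_beta = norm.ppf(power)            # 0.84 for 80% power
    p_bar = (p1 + p2) / 2               # pooled proportion under the null
    numerator = (z_alpha * sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return ceil(numerator / (p1 - p2) ** 2)

# Ease-of-treatment-completion rates cited for DEX-KET vs. DEX alone
print(n_per_group(0.938, 0.6154))  # ~25 with this approximation; exact-test options and corrections raise it toward 28
```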
Each allocation was represented by a code (the serial of the participant in the study and the group name), sealed in serially numbered opaque envelopes and delivered to the pediatric anesthesiologist (PhD), responsible for all anesthetic-related procedures in participants. Blinding The researcher (pediatric dentist performing all operative procedures and assessment of all the study outcomes), participants, and statistician were blinded to the drug regimen administered (triple blind). Only the main supervisor and the anesthesiologist were familiar with the allocation group. After data collection, the allocation group was revealed by breaking the randomization code. Intra-examiner reliability The weighted Kappa coefficient was used to estimate the intra-examiner reliability, which included watching videotapes of a group of 15 patients twice, with a six-day interval between the first and second viewings. K was found to be 0.88 for children's behavior using the Ohio State University Behavior Rating Scale (OSUBRS). These patients were not involved in the clinical trial. Patient preparation A brief medical history was obtained from the guardian on the day before the dental procedure. Eligible patients were examined by the anesthesiologist to assess whether they were fit for sedation, and guardians were informed about the pre-sedation fasting instructions of 2, 4, and 6 h of fasting for clear liquids, milk, and light meals, respectively. The child's behavior was assessed at baseline following the OSUBRS. On the day of the appointment, vital parameters were monitored for all participants before, during treatment, and until discharge, including systolic and diastolic blood pressure (SBP and DBP, respectively) using a digital sphygmomanometer (Joytech Healthcare DBP-1231 Arm-type Fully Automatic Digital Blood Pressure Monitor), as well as heart rate and oxygen saturation using a pediatric pulse oximeter (CONTEC CMS50D Fingertip Pulse Oximeter). Assessment of procedure-induced stress (Salivary s-IgA) The first saliva sample was collected from each child while in the waiting room before drug administration using a sterile cotton roll placed sublingually for 1 minute until it was soaked, collecting 0.5 ml of saliva. This was inserted into a syringe, then saliva was ejected into an Eppendorf tube labeled with the code number of the patient and stored at a temperature of −80°C until analyzed. The second salivary sample was collected postoperatively using the same method. For analysis, samples were thawed and centrifuged for 10 minutes at 2,000–3,000 rpm at 2–8°C to remove particulates. Salivary s-IgA was measured using an enzyme-linked immunosorbent assay (ELISA) kit (Sinogeneclon Co., Ltd Human IgA Elisa Kit: Catalog No-SG-1022; Size- 96 Microwells; Lot No- 20221110; China) following the manufacturer's instructions. Sedation procedure The child's body weight was recorded at the start of the appointment to estimate the proper dose of the sedative drugs. The drugs were prepared from parenteral forms of ketamine (Ketamine 50 mg/ml, Rotexmedica, Trittau, Germany) or dexmedetomidine (Precedex 200 mcg/2ml, Hospira. Inc., Lake Forest, IL USA) with saline 0.9% added to obtain the final volume. In Group I, each drug was loaded in a different syringe. Meanwhile, in Group II, the drug dosage was divided into two identical syringes. This procedure was performed to keep the intervention group blinded.
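To make the dose arithmetic concrete, the sketch below computes the volume of each stock solution needed for a given body weight, using the doses and stock concentrations stated above (ketamine 50 mg/mL; dexmedetomidine 200 mcg/2 mL, i.e., 100 mcg/mL). Per the protocol, saline is then added to reach a final volume (not specified in the text) and the contents are divided between syringes; this is an illustrative sketch only, not clinical guidance.

```python
KET_STOCK_MG_PER_ML = 50.0    # ketamine 50 mg/mL (Rotexmedica)
DEX_STOCK_UG_PER_ML = 100.0   # dexmedetomidine 200 mcg/2 mL = 100 mcg/mL (Precedex)

def stock_volumes_ml(weight_kg: float, group: str) -> dict:
    """Volume of each stock solution (mL) for the total dose in each study arm."""
    if group == "DEX-KET":                       # 2 mcg/kg DEX + 2 mg/kg KET
        return {"dexmedetomidine_ml": round(2.0 * weight_kg / DEX_STOCK_UG_PER_ML, 2),
                "ketamine_ml": round(2.0 * weight_kg / KET_STOCK_MG_PER_ML, 2)}
    if group == "DEX":                           # 4 mcg/kg DEX
        return {"dexmedetomidine_ml": round(4.0 * weight_kg / DEX_STOCK_UG_PER_ML, 2)}
    raise ValueError("group must be 'DEX-KET' or 'DEX'")

# Example: an 18 kg child in each arm
print(stock_volumes_ml(18, "DEX-KET"))  # {'dexmedetomidine_ml': 0.36, 'ketamine_ml': 0.72}
print(stock_volumes_ml(18, "DEX"))      # {'dexmedetomidine_ml': 0.72}
```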
The drugs were administered by the anesthesiologist using the LMA MAD Nasal mucosal atomization device (Teleflex, Inc., Research Triangle Park, NC) connected to a 2 ml / 5 ml syringe, which transforms the solution into a fine mist, enabling uniform drug delivery. The MAD was explained to the children using the tell-show-do technique, so that they could understand and follow the drug administration instructions without swallowing the drug. The sedative drugs were administered via the buccal mucosa or in the buccal pouch, evenly divided and sprayed into both cheeks. The child's acceptance of the administered drug was assessed by the anesthesiologist using a 4-point rating scale. To assess the anterograde amnesic effect of the sedative drugs, the child was shown a picture (Pic1: Apple) and was asked to verbally identify it immediately after drug administration (encoding Phase I). Operative procedure The operative procedure started once the child reached the drowsy or asleep stage following the Wilton, et al. sedation scale, considered a state of optimum sedation. For assessing the anterograde amnesic effect, a second picture (Pic2: Train) was shown to the child, who was asked to verbally identify it (Phase I; encoding Phase II) immediately before the local anesthesia administration. Topical anesthesia (benzocaine 30 mL 20%) (Opahl-S, DHARMA RESEARCH INC., USA.) was applied for 60 seconds after drying the tissues. The dose of local anesthetic (Articaine HCL 4% and epinephrine 1:100,000) (ARTINIBSA 4%, Inibsa Dental S.L.U., Spain) was estimated according to patients’ weight. Dental treatment (Simple restorations, Pulpotomy, Stainless Steel Crown, Extraction) was standardized to be accomplished in 15–30 minutes for all patients enrolled in the study. Before discharge, the final stage of the assessment of anterograde amnesia was performed by asking the child to identify the two pictures previously shown in the encoding phases among four pictures; two target pictures (Pic1: Apple, Pic2: Train) and two distractor pictures (Pic3: Carrot, Pic4: Motorbike). This was regarded as Phase II: recall phase. Children were discharged after meeting the discharge criteria following the American Academy of Pediatric Dentistry (AAPD) guidelines. The sedation duration was assessed from the onset of sedation until complete recovery of the patient. Common procedural side effects, including agitation, bradycardia, hypotension, vomiting, and others, observed throughout the procedure were documented. Post-operative evaluation The whole procedure was videotaped, and child behavior was assessed by the operator for both groups using the OSUBRS by watching the recorded videos and attributing behavior scores at local anesthesia administration and during the operative procedure. After 24 hours, parents were contacted by phone to answer the modified Vernon, Schulman, and Foley (1966) questionnaire regarding the postoperative response of their children. Statistical analysis The Mann-Whitney U test was applied for comparisons between groups regarding patient’s acceptance of drugs, children's behavior during sedation, and the postoperative child behavior questionnaire. Pearson’s chi-square and Fisher’s exact test were used to assess differences between groups in amnesic effects of sedative agents. The independent t test was used to compare salivary IgA between groups, whereas the paired t test was used to assess differences in salivary IgA before and after intervention. All tests were two-tailed and the significance level was set at p≤0.05.
Data were analyzed using the IBM SPSS, version 23, Armonk, NY, USA.
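The analysis plan described above maps onto standard open-source routines; a minimal sketch using SciPy is given below, with hypothetical arrays standing in for the study variables (these are not the study data).

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical per-child data standing in for the study variables (n = 28 per arm)
behavior_dexket = rng.integers(1, 5, 28)            # ordinal behavior scores, DEX-KET group
behavior_dex = rng.integers(1, 5, 28)               # ordinal behavior scores, DEX group
siga_pre = rng.normal(120.0, 30.0, 28)              # s-IgA before treatment, DEX-KET group
siga_post = siga_pre + rng.normal(-2.0, 10.0, 28)   # s-IgA after treatment, same children
siga_change_dex = rng.normal(-1.0, 10.0, 28)        # hypothetical post-minus-pre change, DEX group
amnesia_table = np.array([[10, 18],                 # 2x2 counts: amnesia yes/no by group
                          [6, 22]])

# Between-group comparison of ordinal behavior scores
_, p_behavior = stats.mannwhitneyu(behavior_dexket, behavior_dex, alternative="two-sided")

# Amnesia frequencies: chi-square, with Fisher's exact test for sparse tables
_, p_chi2, _, _ = stats.chi2_contingency(amnesia_table)
_, p_fisher = stats.fisher_exact(amnesia_table)

# s-IgA: paired t-test within a group; independent t-test for the between-group difference
_, p_paired = stats.ttest_rel(siga_pre, siga_post)
_, p_between = stats.ttest_ind(siga_post - siga_pre, siga_change_dex)

print(p_behavior, p_chi2, p_fisher, p_paired, p_between)
```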
Of 81 children inspected for eligibility, 56 were recruited and randomly allocated to either the DEX-KET group or the DEX group. The mean age of selected children was 4.28 ± 0.63 years, with 27 males (48%) and 29 females (52%). Hemodynamic results showed no statistically significant differences between the two study groups regarding systolic blood pressure, diastolic blood pressure, heart rate, and oxygen saturation (p>0.05). Sedation duration was significantly shorter in the DEX-KET group (40.23 ± 8.08 minutes) than in the DEX group (55.15 ± 13.32 minutes) (p<0.0001). Regarding drug acceptance, results showed significantly higher acceptance in the DEX group compared to the DEX-KET combination (p = 0.005) ( and ). We found no significant differences in patients' behavior at baseline between the groups as assessed by the OSUBRS (p=0.065). However, we found a statistically significant improvement in patients' behavior scores in favor of DEX-KET at local anesthesia administration (p=0.017) and during the operative procedure (p=0.037).
The improvement in the behavior scores from baseline was significant in both groups (p<0.0001). Drug-related adverse effects were comparable in both groups with no statistically significant differences (p>0.05). Regarding the amnesic effect of sedative drugs, we found no statistically significant differences between the study groups. Moreover, we noted no significant differences between salivary s-IgA levels before and after treatment in both DEX-KET and DEX groups (p=0.535, p=0.739, respectively). Furthermore, the mean difference in salivary s-IgA levels between initial and final value was not statistically significant between groups (p=0.556). We found no statistically significant differences for post-operative effects of sedation considering the scores obtained in the Vernon, Schulman, and Foley (1966) modified questionnaire, as parents reported that children’s behavior was the same as before (p>0.05). Managing uncooperative and anxious children during pediatric dental treatment may be unachievable with traditional behavioral management techniques. For this reason, moderate sedation has been increasingly used to improve child cooperation and enable the delivery of high-quality dental care. Various drugs have been used to achieve this goal and, in an attempt to improve the results, new combinations with various routes of administration have been proposed. Dexmedetomidine and ketamine exhibit complementary pharmacological effects. When used together, dexmedetomidine may attenuate the tachycardia, hypertension, salivation, and emergence phenomena associated with ketamine. This combination shows a low incidence of side effects and rapid recovery with no cardiorespiratory depression. However, to date, to the best of our knowledge, a comparative study of buccal administration of DEX-KET combination and DEX for sedation of pediatric dental patients has not been published. Thus, a well-designed randomized controlled clinical trial is needed to evaluate and compare the effect of DEX-KET combination versus DEX on the behavior and anxiety of uncooperative children requiring dental treatment. This study showed that both DEX-KET combination and DEX alone were effective in promoting hemodynamic stability. Moreover, DEX-KET combination provided rapid recovery compared to DEX alone. Previous studies have reported that the dexmedetomidine-ketamine combination shows complementary pharmacological effects, as these medications have opposing hemodynamic effects, and the addition of KET to DEX offered rapid recovery when compared to dexmedetomidine alone. In this study, patients in the DEX group showed significantly better drug acceptance than the DEX-KET group. This could be attributed to the fact that dexmedetomidine is tasteless and odorless, which helped the children accept the drug and keep it in the mouth for 30 seconds. On the other hand, the ketamine formulation has a bitter and astringent taste. The results of this study show that children in the dexmedetomidine-ketamine group exhibited a quieter attitude throughout local anesthesia injection and during the operative procedure, when compared to those who received dexmedetomidine alone, as assessed by the OSUBRS. It therefore appears that combining the anxiolytic effects of dexmedetomidine and the analgesic effects of ketamine could improve pediatric behavior compared to using DEX alone.
The superiority of combining dexmedetomidine and ketamine has been demonstrated in previous studies, which reported that this combination can improve child behavior during parental separation and venous cannulation. Our results also corroborate a previous study by Agarwal, et al. (2023), who confirmed that the frequency of calm and cooperative behavior was higher in the DEX-KET group when compared to different drug combinations. On the contrary, Sado-Filho, et al. (2021) found no significant differences in the percentage of calm behavior between dexmedetomidine alone and in combination with ketamine when assessing procedural sedation for pediatric dental settings. The difference in their results could be explained by their use of different intranasal drug dosages. Another study by Haider, et al. (2022) reported that DEX alone was as efficacious as the DEX-KET combination for sedation of uncooperative pediatric dental patients. This could be attributed to the administration of different dosages via the intravenous route, followed by a maintenance dose to keep children sedated throughout the whole procedure in both groups. Amnesia during dental treatment is highly beneficial in reducing awareness of traumatic events for pediatric patients. Children who remember less about their perioperative events experience less psychologic trauma and are expected to present fewer negative behaviors. Although KET is characterized by its amnesic effect, this study showed no statistically significant differences between groups. However, we highlight that the number of children who exhibited anterograde amnesia was higher with the DEX-KET combination than with DEX alone. This is consistent with a study by Singh, et al. (2014), in which a significantly higher number of patients sedated with KET exhibited anterograde amnesia when compared to patients who received DEX. It could be postulated that KET would only be able to induce profound amnesia when given at full dose. This was not the case in this study, in which the dose was reduced to half of that recommended, aiming to safely combine it with DEX. Moreover, we found that both groups had a similar profile for the incidence of drug-related adverse effects, which were all minor and easily treatable. Similarly, previous studies have reported minor procedural side effects in children sedated with DEX-KET combination or DEX alone. Control of stress in children during dental treatment is essential to ensure the delivery of proper dental care. It has been stated that s-IgA could be a potential stress biomarker for pediatric populations. In this study, although mean s-IgA levels decreased following dental treatment as compared to pretreatment values, a significant difference was not recorded. Likewise, mean differences between initial and final values of s-IgA were not found to be statistically significant between both groups. In this context, it could be assumed that children of both groups did not experience an event stressful enough to initiate an s-IgA response. In addition, any immediate stress that might have arisen due to the procedure seemed to fade away in quite a short time in response to the drug administered, which in turn helped s-IgA levels to return to almost initial values. In accordance with this finding, two studies measuring s-IgA reactivity under acute stress found no s-IgA response to psychological stress before puberty.
They suggested that children's immune systems may not respond to acute stress, in comparison to adolescents, since children are born with an undeveloped immune system, which matures as they grow. The results of this study showed that neither drug regimen exhibited any effect on the postoperative behavioral responses of children. This could be related to the capacity of the drugs to reduce pain and anxiety and reduce awareness of the details of the treatment, all of which may influence the reduction of postoperative negative behavior. Dexmedetomidine shows neuroprotective effects and is capable of modulating the stress response, which may increase long-term benefits. Moreover, ketamine possesses hypnotic, analgesic, and amnesic effects, which are important features that might be very beneficial in clinical practice. Another interesting aspect of ketamine is that it is thought to produce a unique clinical state by inducing a dissociation from the environment. This places the patient in the classic “ketamine stare,” in which the patient looks vacantly into space with open eyes and nystagmus. However, this phenomenon was not observed in this study. This might very well be attributed to the capacity of dexmedetomidine to reduce ketamine-induced dissociative symptoms and produce sedation that more closely resembles natural sleep. Similarly, a study by Sullivan, et al. (2001), who compared 2 oral ketamine-diazepam regimens in preschool children, reported that patients did not exhibit eye watering or pronounced nystagmus. A possible limitation of this study was that s-IgA levels were measured at baseline and after completion of treatment but not during the procedure. A second limitation was that ketamine was administered without rendering it more palatable by adding a flavored solution to increase drug acceptance. Further studies with more palatable preparations of ketamine and different concentrations should be conducted to promote the use of the buccally administered DEX-KET combination. Although all treatment procedures were accomplished in profoundly anesthetized patients, standardization of dental treatment may be required in future studies to avoid any bias that may affect the study outcomes. Based on the previous data and within the limitations of this study, the hypothesis that there is no difference between DEX-KET and DEX administered via the buccal route regarding patients' drug acceptance, child behavior during the procedure, amnesia, post-operative effects, and procedure-induced stress as measured by s-IgA was rejected. Both the dexmedetomidine-ketamine combination and dexmedetomidine alone are effective in providing hemodynamic stability. DEX-KET has a poor taste, which might limit its acceptance for buccal administration. The DEX-KET combination showed superior behavioral improvement during sedation sessions. Amnesia was comparable in both regimens with no significant adverse events. Salivary s-IgA is not considered a potential stress biomarker in sedated children. Subjective post-operative questionnaire assessment revealed no negative effect on behavior 24 hours after the sedation session in both groups.
Soft tissue swelling on anterior posterior radiograph predicts subsequent displacement of song 2 lateral humeral condyle fractures
51dc523c-7316-41c6-97c2-531f66a879a6
11927239
Musculoskeletal System[mh]
Lateral humeral condyle fractures (LHCF) account for 10–20% of all distal humerus fractures and are the second most common elbow fracture in children [ – ]. The Song criteria are one of the most popular classifications of LHCF; according to the Song classification, the treatment of Song 2 fractures is controversial because the stability of the fracture is difficult to determine. The rates of subsequent displacement after conservative treatment for minimally displaced LHCFs range from 1.2–45% [ , , ]. The integrity of the cartilage hinge is crucial in determining the stability of these fractures. Although MRI, arthrography, and ultrasonography have been employed to assess the cartilage hinge, they have their respective limitations. X-ray is the basic imaging examination for fracture diagnosis; various criteria for assessing initial plain radiographs have been described, but few definitive radiographic characteristics have been identified in fractures initially measuring ≤ 2 mm to help predict subsequent displacement [ , – ]. Additionally, the radiograph provides information not only on the bone but also on the soft tissue. A fat pad visible on a lateral radiograph of a child’s elbow following trauma is generally considered to indicate an intracapsular fracture of the elbow. However, the reliability of the fat pad sign in predicting subsequent displacement of LHCF is still under debate. In this study, we introduce a novel parameter to measure the swelling of lateral elbow soft tissue on the anterior-posterior (AP) radiograph. Meanwhile, the aim of this study is to investigate whether the swelling of soft tissue on the lateral side of the elbow could be used as a predictor of the stability of Song 2 fractures. This was a retrospective study of children admitted to our hospital from January 2017 to March 2023. Institutional Review Board approval was required. Inclusion criteria were patients diagnosed with LHCF whose follow-up extended at least from diagnosis to fracture healing or subsequent displacement. Patients with proximal fracture of the ulna or radius, elbow dislocation, distal humerus congenital or acquired deformity, coagulation dysfunction, or incomplete data were excluded. Fractures were classified according to the Song classification. Song 2 fractures that initially underwent conservative treatment were divided into 2 groups according to whether subsequent displacement occurred: (1) displacement group, and (2) nondisplaced group. The swelling value (Fig. ) was measured on the AP radiograph at the time of fracture diagnosis (swelling value 1) and healing (swelling value 2), using Picture Archiving and Communication System (PACS) imaging technology. Subsequent displacement was defined as a fracture gap displaced over 2 mm without callus. The age of onset, laterality, sex, time of late displacement after diagnosis, and follow-up times were collected. Two independent doctors measured and collected all data. Statistical analysis The data were analyzed using IBM SPSS Statistics Version 19.0 (IBM Corporation, Armonk, NY) and R (version 4.2.3). Continuous variables were reported as the mean ± SD and ranges. Categorical data were described as frequencies with percentages and analyzed with the χ2 test. Continuous results were analyzed using the t test or analysis of variance. Binary logistic regression analysis was used to determine the risk factors for subsequent displacement.
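For the risk-factor analysis just described, a binary logistic regression can be sketched with statsmodels as below; the predictor and outcome arrays are hypothetical stand-ins (not the study data), and a real model would use the variables collected here (e.g., swelling value 1, age, sex, laterality).

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 50  # hypothetical cohort of conservatively treated Song 2 fractures

# Hypothetical predictors and outcome (1 = subsequent displacement)
df = pd.DataFrame({
    "swelling_value_1": rng.normal(0.51, 0.03, n),
    "age_months": rng.normal(60, 25, n),
    "male": rng.integers(0, 2, n),
})
logits = 40 * (df["swelling_value_1"] - 0.52)      # displacement more likely with more swelling
df["displaced"] = rng.binomial(1, 1 / (1 + np.exp(-logits)))

X = sm.add_constant(df[["swelling_value_1", "age_months", "male"]])
model = sm.Logit(df["displaced"], X).fit(disp=False)
print(model.summary())         # coefficients, p-values, confidence intervals
print(np.exp(model.params))    # odds ratios per unit change of each predictor
```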
Receiver operating characteristic curve (ROC) analysis was used to determine the cutoff of swelling value 1 for subsequent displacement in Song 2 type fractures. Internal validation was performed using bootstrap. The intraclass correlation coefficient (ICC) with 95% confidence intervals was used to quantify interobserver and intraobserver agreement in the measurement of the swelling value. The ICC was interpreted as follows: less than 0.4 as poor agreement, 0.4 to 0.75 as moderate agreement, and values more than 0.75 as good agreement. Differences were considered significant when P < 0.05. Demographic data for the total of 255 patients are shown in Table . There were nearly twice as many boys (168, 65.9%) as girls (87, 34.1%). One hundred and forty fractures (54.9%) were on the left side, and 115 (45.1%) on the right side. The mean age was 60.7 ± 25.0 months (range, 16–156 months), and average follow-up was 2.1 ± 2.8 months (range, 0.5–42 months). The interobserver (ICC = 0.931, 0.881 to 0.960) and intraobserver (ICC = 0.941, 0.899 to 0.966) reliability was good for the measurement of the swelling value. Swelling value 1 increased with the severity of the Song type, being significantly smaller in Song 2 fractures than in Song 4 and 5 (P = 0.0002 and 0.000, respectively) (Fig. a). However, swelling value 2 remained between 46% and 48%, and increased slightly in Song 5, but there was no significant difference among the groups (Fig. b). Of the 97 Song 2 fracture patients, 51.5% (50/97) initially underwent conservative treatment, and 28% (14/50) developed subsequent displacement. Of these 14 patients, 11 underwent further surgery (Fig. ). There was no significant difference in side, sex, and age between the displacement and nondisplaced groups. Notably, swelling value 1 was significantly higher in the displacement group than in the nondisplaced group (54.4% ± 2.4% vs. 50.3% ± 3.0%, P = 0.001), and the mean timing of late displacement was 3.9 ± 2.4 days (range, 1–8 days) (Figs. and ). Additionally, the mean duration of immobilization in the displacement group was longer than in the nondisplaced group (44.8 ± 6.6 vs. 34.3 ± 7.8 days, P = 0.000). There was no nonunion or malunion, and elbow motion was good in both groups. Binary logistic regression analysis was used to determine the power of the risk factors for subsequent displacement. The results showed that swelling value 1 was significantly different (P = 0.003) (Table ).
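The ROC-based cutoff determination and bootstrap internal validation described in the methods can be sketched as follows with scikit-learn; the swelling values below are simulated from the group summary statistics reported above and are not the actual study data.

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(1)

# Simulated Song 2 cohort: swelling value 1 drawn from the reported group means/SDs
swelling = np.concatenate([rng.normal(0.503, 0.030, 36),   # nondisplaced group
                           rng.normal(0.544, 0.024, 14)])  # displacement group
displaced = np.concatenate([np.zeros(36), np.ones(14)]).astype(int)

# ROC curve, AUC, and the Youden-index-optimal cutoff
fpr, tpr, thresholds = roc_curve(displaced, swelling)
best = np.argmax(tpr - fpr)
print(f"cutoff = {thresholds[best]:.3f}, sensitivity = {tpr[best]:.3f}, "
      f"specificity = {1 - fpr[best]:.3f}, AUC = {roc_auc_score(displaced, swelling):.3f}")

# Simple 1000-resample bootstrap of the AUC (a simplified stand-in for the
# model-validation bootstrap used to report the C-index)
boot = []
n = len(displaced)
for _ in range(1000):
    idx = rng.integers(0, n, n)
    if displaced[idx].min() != displaced[idx].max():   # need both classes in the resample
        boot.append(roc_auc_score(displaced[idx], swelling[idx]))
print(f"bootstrap mean AUC = {np.mean(boot):.3f}")
```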
In addition, ROC analysis showed that the cutoff value for swelling value 1 was 53.0%, with a specificity of 0.917, a sensitivity of 0.714, and an area under the curve of 0.863 (P < 0.001) (Fig. ). Internal validation of the prediction model was performed with a 1000-resample bootstrap procedure; the C-index (concordance index) was 0.864, indicating good discrimination.
In this study, we extended our focus beyond traditional bone imaging to include soft tissue analysis, establishing a novel parameter to predict the stability of Song 2 fractures on the AP radiograph. The optimal treatment for LHCF with initially minimal displacement (≤ 2 mm) remains controversial, largely because of the difficulty in determining the stability of the lateral condyle at the time of injury. Radiographic examination of the elbow in younger children (below 8 years of age), who have a predominantly cartilaginous capitellum and variable ossification patterns of the condyle, often fails to provide clear visualization of the degree of fracture displacement. While Song et al. recommended the internal oblique radiograph as more accurate for demonstrating the fracture gap, such imaging is not routinely performed in many children’s hospitals. Moreover, according to the Song classification, the stability of Song 2 fractures is difficult to determine. MRI, arthrography, and ultrasonography have been used to detect the cartilage hinge, but despite their efficacy they are limited by the need for sedation or anesthesia in most children, increased medical costs, and, in the case of ultrasonography, user-dependency and discomfort when the probe is applied directly over the fracture site. Compared with MRI, arthrography, and ultrasonography, our parameter is more widely available, inexpensive, painless, and quick.
We chose the AC line at the level of the metaphysis of the proximal radius for three reasons: first, the capitellum may be displaced or rotated in Song 4 or 5 fractures; second, the appearance of ossification of the proximal radius is inconsistent across patients; and third, we observed that soft tissue swelling was most pronounced at this particular level. The majority of LHCF occur as isolated injuries, but they can also be combined with fractures of the radial neck or proximal ulna or with elbow dislocation; to avoid other factors affecting the swelling of the lateral elbow soft tissues, patients with a proximal fracture of the ulna or radius, elbow dislocation, or coagulation dysfunction were excluded. We measured the swelling value at two time points: at fracture diagnosis (swelling value 1) and at fracture healing (swelling value 2). The results showed that swelling value 1 correlated with the severity of the fracture and followed an increasing trend; that is, the more severe the fracture, the larger swelling value 1. Interestingly, our results suggest that the cutoff for swelling value 1 is 53.0%, slightly higher than the mean for Song 3 fractures (52.5%), so swelling value 1 may indirectly reflect the integrity of the cartilage hinge. While we did not measure the swelling value in normal elbows, the consistent range of swelling value 2 (46–48%) leads us to believe that it may represent the swelling value under normal conditions. The slight increase in swelling value 2 in Song 5 fractures could imply stronger long extensor muscles, potentially explaining why rotation occurs in these cases.
Our study observed a 28% (14/50) incidence of late displacement in Song 2 fractures, a rate higher than those reported by Kraft et al. (1.2%, 3/250), Knapik et al. (14.9%, 53/355), and Bast et al. (2.1%, 2/95); this difference could be attributed to our specific focus on Song 2 fractures. The average time to late displacement was 3.9 days (range, 1–8 days), aligning with findings from other studies. Nonunion is the major complication of subsequent displacement: approximately 14.5% (7/45) of fractures with delayed displacement have been reported to progress to nonunion despite immobilization. In this study, there was no nonunion or malunion in either the displacement or nondisplaced group, but the mean duration of immobilization in the displacement group was longer than in the nondisplaced group (44.8 ± 6.6 vs. 34.3 ± 7.8 days, P < 0.001). Thus, identifying a reliable parameter to predict subsequent displacement in initially minimally displaced LHCF is crucial not only to prevent complications but also to avoid overtreatment; this is the principal significance of our study. However, our study has limitations, including its nature as a single-center retrospective study, a limited sample of Song 2 fractures treated conservatively, and the absence of internal oblique radiographs. A multicenter prospective study with a larger patient cohort is necessary to further validate the effectiveness of the swelling value as a predictive factor. In conclusion, the swelling value correlates with the Song type and shows an increasing trend. It may serve as an indirect indicator of the integrity of the cartilage hinge and a potentially reliable predictor of subsequent displacement in Song 2 fractures.
Advancing Palliative Care Integration in Hematology: Building Upon Existing Evidence
Palliative care (PC) is defined by the World Health Organization as an “approach that improves the quality-of-life (QOL) of patients and their families facing the problems associated with life-threatening illness, through the prevention and relief of suffering by means of early identification and impeccable assessment and treatment of pain and other problems, physical, psychosocial, and spiritual”. PC specialists provide complex symptom management and family-centered biopsychosocial assessments with effective communication and a focus on QOL. Though PC is often misperceived as end-of-life care, ideal PC is integrated early in the illness trajectory alongside life-prolonging or potentially curative therapies. While PC is sometimes provided by oncologists or other members of the cancer care team, referred to as “primary PC,” our use of the term refers to specialty PC services, or those provided by specialized clinicians on an interdisciplinary team.
Palliative care has many established benefits for patients with cancer. The 2009 ENABLE II randomized trial paved the way for numerous studies supporting PC integration in cancer care. In 2010, Temel et al. demonstrated that early PC led to improved QOL, improved survival, and decreased intensity of end-of-life care among patients with metastatic non-small-cell lung cancer. In 2015, the ENABLE III randomized trial showed that patients receiving earlier, as compared to later, PC had improved one-year survival and reduced family and caregiver burden and depression. Multiple systematic reviews and meta-analyses also demonstrate benefit from PC interventions, including improvements in patient QOL, symptom burden, caregiver outcomes, advance care planning, health care utilization, and, often, patient survival. As a result, many organizations have called for the integration of PC into routine comprehensive oncology care.
Patients with hematologic malignancies (HM) suffer similar, or sometimes greater, symptom burdens compared with patients with metastatic solid tumors. Studies show that patients with blood cancers commonly present with distress and numerous physical and psychological symptoms including fatigue, insomnia, dry mouth, pain, and anxiety. Patients with HM also receive inadequate symptom management, psychological support, and engagement in advance care planning, all of which contribute substantially to increased morbidity. While survival is improving, approximately 50,000 deaths annually are attributed to HM. As part of standard HM management, many patients receive intensive treatments and prolonged hospitalization, and oftentimes require life-long suppressive therapies. Patients with HM are more likely than those with solid tumors to receive intensive end-of-life care (e.g., chemotherapy or intensive care at end-of-life) and to die in the hospital. They are also less likely than solid tumor patients to have documented care preferences such as advance care plans or to be referred to hospice. Patients referred to hospice tend to experience shorter lengths of stay, which signals late referrals and limited benefit from hospice care. Patients with HM also experience significant barriers to hospice care, including lack of access to blood transfusions for symptom support. Among survivors, symptoms often persist, with long-term sequelae and quality-of-life implications.
Survivors commonly experience fatigue, pain, neuropathy, cardiomyopathy, neurocognitive deficits, psychological distress, anticipatory grief, fear of recurrence, and post-traumatic stress. While high-quality evidence supports the integration of PC in oncology, many barriers exist. Most randomized clinical trials of PC integration have excluded patients with HM. The integration of PC into standard HM care has therefore lagged behind that of solid tumor care. Hematologic malignancy specialists are less likely than solid tumor oncologists to request specialty PC consultation. Many also equate PC with end-of-life care and may not recognize the demonstrated benefits of early PC. Furthermore, hematologic malignancy specialists may wish to address the primary PC needs of their own patients. Surveys have shown, however, that hematologic malignancy specialists often express discomfort discussing death or hospice referral, as well as a sense of shame that this transition in treatment goals may indicate a personal failure.
The treatment trajectories of blood cancers also create hurdles for service integration. The possibility of cure is rather unique to HM, especially when compared to most other advanced cancers, and drives aggressive clinical decision-making. Patients with HM often have a rapid and unpredictable decline at the end of life, which contributes to prognostic uncertainty and challenges both clinicians and patients; more than half of patients with HM have a different understanding of their prognosis than their hematologist. Prognostic uncertainties and misperceptions about treatment risks and benefits represent unmet PC needs for patients and families with HM.
Interest in earlier integration of PC into hematologic malignancy care is growing. Several recent studies demonstrate the benefits of PC integrated into HM care, yet only for some specific HM (e.g., acute myeloid leukemia) and care settings (inpatient). HM are a heterogeneous group of diseases, and each patient with HM has unique needs, salient clinical features, treatment paradigms, and expected outcomes. Thus, each major disease group will likely have a different solution to the puzzle of PC–hemato-oncology integration. In what follows, we summarize salient features and treatment paradigms for the major HM disease subtypes. We then examine what is known about the PC needs specific to each disease, detail relevant studies of PC integration, and discuss the anticipated needs of each group, emphasizing areas warranting further study. A summary may be found in Table . The three main categories of HM include (1) leukemias, (2) lymphomas, and (3) multiple myeloma (MM). For clarity and completeness, we include further subcategories: acute versus chronic leukemias, myelodysplastic syndromes and myeloproliferative neoplasms, and cellular therapies, including CAR-T therapy and hematopoietic stem cell transplantation (HSCT), which may be performed in several disease states and have specific PC considerations.
While each disease is unique, HM as a group generally share at least one of the following: (1) the need for intensive treatments to achieve remission or cure, which is associated with risk for early mortality and/or treatment toxicity; (2) prognostic uncertainty and unpredictable illness courses, with wide variability in outcomes including the possibility of cure; (3) a sometimes chronic, indolent, and/or relapsing and remitting course, requiring indefinite and continuous oral suppressive therapies; and (4) high patient and caregiving burden (physical, emotional, and/or spiritual), even after treatment completion.
Treatment paradigms
Leukemias are subdivided into two types: acute and chronic. Acute myeloid leukemia (AML) is the most common in adults, followed by acute lymphoid leukemia (ALL). The abrupt presentation and rapidly progressive nature of acute leukemia make it more responsive to chemotherapy than solid tumors, which means a chance for cure. Cure, however, typically requires higher dose-intensity chemotherapy than typical solid tumor regimens and/or consolidative hematopoietic stem cell transplantation. This establishes a characteristic “high-risk, high-reward” treatment paradigm in acute leukemia care. Some patients receive intensive chemotherapy regimens while others receive non-intensive treatments, which contributes to wide variability in treatment-associated morbidity and mortality. HSCT is considered for many patients and is discussed in a separate section below.
Symptom burden and palliative care needs
Acute leukemia may be among the most psychologically distressing of all cancers. The intensive treatments for acute leukemias necessitate weeks-long and socially isolating hospitalizations. Treatments can carry high risk of death and treatment-related toxicities. Physical symptoms may be severe and include fevers, fatigue, mucositis, and other distressing gastrointestinal symptoms. Treatment effects may be long-lasting, contributing to emotional and psychological symptoms and worsened QOL. A secondary data analysis of 160 patients with AML found that a substantial proportion reported clinically significant post-traumatic stress symptoms one month after intensive chemotherapy. Approximately a third of patients will report significant depressive or anxiety symptoms, and another one-third experience acute stress reactions from the shock of the diagnosis and unexpected urgent hospitalization.
In addition to the high burden of physical and emotional symptoms, patients with acute leukemias have additional unmet PC-related needs. One study revealed that while 86% of AML patients were expected by their oncologists to have a poor prognosis, 74% of these patients reported at least a 50% chance of cure. Older adults with AML over-estimate their prognosis by threefold. This skew towards optimistic prognostication and aggressive care leads to increased healthcare utilization at the end of life. Patients with acute leukemia are more likely to choose aggressive therapies and die in the hospital, while accessing PC services less frequently than patients with advanced solid tumors. A study of 168 deceased patients with acute leukemia revealed that 66.7% were hospitalized in the last week of life and over half received chemotherapy in the last 30 days of life. Another study of 200 leukemia patients reported that the median time from the last code status transition to death was only two days.
Thirty-two percent (32%) of those code status conversations occurred at the time of clinical deterioration and 39.5% without the patient present or capable of making their own medical decisions. Thus, there are significant opportunities for improvement in advance care planning and symptom management in acute leukemia care.
Palliative care integration
In 2021, El-Jawahri published a multisite randomized clinical trial of 160 adults with AML undergoing intensive chemotherapy showing that integrated specialty PC significantly improves patient-reported QOL, depression, anxiety, and posttraumatic stress symptoms. Among the patients who died, those receiving integrated PC were more likely to have discussed their end-of-life preferences and less likely to receive chemotherapy at end-of-life. Smaller studies reinforce the benefits of integrated PC in acute leukemia care, showing increased hospice use and fewer intensive care unit admissions.
Next steps
For patients with AML receiving intensive induction chemotherapy, the evidence of benefit of PC integration early in the disease course is clear. PC should be involved at the time of admission for induction chemotherapy or index hospitalization. PC services have much to offer including providing symptom management throughout the hospitalization, caregiver encouragement, support during potential HSCT, and, if needed, end-of-life care. Future work is needed in this area with at least one major clinical trial underway. SPRINT is an active multisite randomized controlled clinical trial examining collaborative palliative and leukemia care versus standard leukemia care alone in patients with AML and high risk MDS receiving non-intensive chemotherapy.
Treatment paradigms
In direct contrast with acute leukemias, chronic leukemias often present asymptomatically and have a slow disease course. Chronic myeloid leukemia (CML) and chronic lymphoid leukemia (CLL) are the most common types in adults. In general, prognosis is measured in years, sometimes even decades. Typical cases of CML are treated with oral tyrosine kinase inhibitors, which typically confer an excellent prognosis and often well-tolerated side effects. While prognosis is good, patients with CML often require indefinite and continuous oral targeted therapy, which has psychological, financial, and other implications. This ‘treatment-without-end’ paradigm is characteristic of CML care.
Atypical cases of CML, such as those with resistant mutations or those that transform to AML, may have shortened survival or experience the effects described previously related to the transformed acute leukemia. Some patients face severe side effects or tolerability issues from their treatment. CLL is typified by older patient age and a slow-growing, indolent nature, with most patients presenting initially asymptomatic. The default management strategy for those with less aggressive variants is ‘active surveillance’ without treatment. Sometimes CLL can transform into an aggressive and life-limiting variant or aggressive lymphoma (“Richter’s transformation”), which indicates a poor prognosis. Regardless, patients with CLL can face frequent infectious and nosocomial complications, the need for hospitalization or intermittent treatments unpredictably, and, as with other HM, difficult prognostication. Some require indefinite oral therapies, at significant financial cost and sometimes with unfavorable side effect profiles.
Symptom burden and palliative care needs
There is marked clinical heterogeneity in how chronic leukemia variants may impact a patient’s life. Patients with chronic leukemias may suffer severe toxicities and symptoms, often related to the chronicity of the illness and/or treatment. An international survey of 1482 patients with CLL found significantly worse emotional well-being in those with CLL than in other patients with cancer. Life-long suppressive treatments, uncertainty related to the timing and severity of inevitable relapse, and nosocomial complications all contribute to detriments in physical, emotional, and financial well-being.
Palliative care integration and next steps
While it is recognized that patients with chronic leukemias experience significant symptom burden, often stemming from the cumulative toxicities of decades of continuous oral targeted therapies and recurring relapses, little has been studied regarding PC integration into standard chronic leukemia care. PC specialists could provide an extra layer of support for patients with chronic leukemias, particularly regarding the need for psycho-oncologic support interventions for patients with CLL and enhanced symptom management. Furthermore, there may be opportunity to provide support around the experience of living with a chronic illness, which may be experienced as a sword of Damocles, with the potential for transformation or progression in a manner that is difficult to predict. It is not clear, however, that every patient with a chronic leukemia needs or would benefit from specialist palliative care services. Perhaps those with CLL may do well with geriatrics or social work support, with periodic PC consultation for those with specific and challenging PC needs. More research is needed to better understand the needs and the optimal involvement of PC in this population.
Treatment paradigms
MM is the second most common hematologic malignancy. While survival is improving with the advent of new therapies, MM remains generally incurable. The disease course is typified by periods of remission and relapse. The time in remission before relapse varies greatly depending on disease phenotype and the clinical aggressiveness of the MM, though in ideal circumstances it can last years. Patients often remain on maintenance therapies even when in remission. MM patients are commonly treated with five or more lines of therapy, including HSCT and multidrug regimens.
Patients with ‘standard risk’ MM may be expected to live approximately 5–10 or more years.
Symptom burden and palliative care needs
MM patients receive indefinite therapy and must cope with the relapsing and remitting disease course, cumulative toxicities, and chronic survivorship. Symptoms, both physical and psychological, stem from the snowballing effects of treatments, the expectation and timing uncertainty of inevitable relapse, and the need for recurrent treatments. Patients with MM have been described as having worse physical function and global mental health than the general population, as well as diminished health-related QOL when compared to those with other HM. Physically, patients experience fatigue, pain, breathlessness, nausea, muscle weakness, and peripheral neuropathy. Psychologically, patients report the impact of social isolation, financial stress, relationship strain, anticipatory grief, and the toll of endless and unrelenting treatment. A recent cross-sectional, multisite study of 180 MM patients reported that nearly 25% of patients had clinically significant depression, anxiety, and post-traumatic stress symptoms. There is growing evidence that the caregivers of patients with MM also struggle with psychological symptoms and could benefit from PC support. A cross-sectional, multisite study of 127 MM caregivers revealed that 44.1% had clinically significant anxiety, while another 24.4% reported post-traumatic stress symptoms. Caregivers reported higher rates of anxiety than the patients with MM themselves. Prognostic misunderstanding and patient–provider communication were thought to be major contributors to stress for caregivers of patients with MM in this study.
Palliative care integration and next steps
Despite the high illness burden experienced by both MM patients and caregivers, there is a paucity of data on models for PC integration in the care of MM patients. PC integration research is needed in the MM population. During periods of disease progression, the possible benefits of PC involvement seem clear. During the prolonged periods of disease control, however, the ideal integration and involvement of palliative services is less apparent. We suspect that patients may still benefit from symptom assessments, psychosocial support, and assistance with coping during this period marked by survivorship and the anticipation of inevitable relapse. Assessing for unmet palliative-related needs and considering PC involvement is crucial throughout the trajectory of care.
Treatment paradigms
Lymphomas are a heterogeneous group of diseases, divided into Hodgkin’s lymphoma (HL) and non-Hodgkin’s lymphoma (NHL). While HL has a chance of cure with intensive chemotherapy and a fair prognosis overall (approximately 90% 5-year survival), NHLs vary in severity, treatment responsiveness, and prognosis. There are more than three dozen subtypes of NHL, with presentations spanning the full range from indolent to aggressive. Aggressive NHL tends to respond favorably to chemotherapy and may be curable. Diffuse large B cell lymphoma (DLBCL), for example, carries an expected cure rate around 40–50% with multiagent chemotherapies. However, there are genetic and other risk factors, such as relapsed DLBCL, which may confer a worse prognosis, especially if unresponsive to initial treatments. Indolent NHL, such as “follicular lymphoma,” may be incurable, but often with long expected survival rates. Indolent NHL may be experienced similarly to some chronic leukemias in that the initial discovery of the disease may be met with “active surveillance” and without treatment. In general, the median survival is often greater than ten years. After treatment courses, patients usually experience periods of remission, sometimes lasting years, but disease relapse and progression are inevitable. Over time, multiple relapses and lines of treatment devolve into diminished treatment responsiveness, progressive decline, and steady disease progression.
As with chronic leukemias, “transformation” to a phenotypically more aggressive variant with poor prognosis is possible.
Symptom burden and palliative care needs
Patients with lymphoma experience high symptom burden and PC-related needs. In HL patients, emotional and physical distress is common. NHL patients experience high rates of financial toxicity and physical symptoms. Fatigue, in particular, can be severe, debilitating, and persistent even in survivorship. More than 50% of NHL patients experience substantial treatment toxicities and high care utilization at the end of life. One study of 91 older NHL patients with aggressive disease demonstrated that in the last 30 days of life, 70% were hospitalized, one-third received systemic therapies, nearly one-quarter underwent admission to an intensive care unit, and more than half died in a healthcare facility. Fewer than half of these patients received PC consultation and even fewer were referred to hospice. As newer treatments bring improved prognosis for both types of lymphoma, patients are increasingly having to contend with issues of survivorship including persistent physical symptoms, post-traumatic stress, and financial toxicity.
Palliative care integration and next steps
Robust clinical trials examining PC integration in lymphoma care have not been conducted to date. Further research is needed to identify the optimal approach to PC integration in lymphoma care. We suspect that lymphoma patients may benefit from symptom screening, advance care planning, and an extra layer of support at numerous time points throughout the disease course.
Treatment paradigms
Myelodysplastic syndromes (MDS) and myeloproliferative neoplasms (MPN) are another widely heterogeneous group of disorders. These disease processes result from mutations occurring in the stem cells of the bone marrow. It is possible, though not necessary, for one patient to have features of both MPN and MDS. MPN are a subtype of HM in which the bone marrow malignantly overproduces leukocytes, erythrocytes, or platelets, leading to (a) CML (as described previously), (b) polycythemia vera, (c) essential thrombocythemia, or (d) myelofibrosis, respectively. MDS occurs when the bone marrow fails to produce appropriate quantities of mature and functional blood cells and, instead, produces immature and dysplastic cells. There are several variants of MDS with substantial phenotypic variability. Higher-risk MDS confers a bleak prognosis with rapid disease progression, high risk of transformation to AML, and poor long-term survival. Presently, the only potentially curative therapy for MDS is HSCT, which carries its own risks and associated burden as described in a separate section below. Patients with MDS frequently require blood transfusions and other invasive treatments. While many new targeted therapies are available for other myeloid diseases like AML and CML, there are few approved treatment options for MDS and MPNs.
Symptom burden and palliative care needs
MPN and MDS patients experience significantly diminished health-related QOL. Fatigue and dyspnea are common and debilitating physical symptoms. Blood transfusions and HSCT can help prolong life and sometimes reduce symptoms of fatigue and dyspnea, though each also imparts its own risks. Blood transfusion dependence is onerous—physically, emotionally, and financially. Patients may also experience unexpected urgent hospitalization for bleeding, complications, infections, or transformation to secondary AML. Psychological symptoms are thought to be common, though data are lacking. Patients with high-risk disease, functional impairment, and transfusion dependence carry higher risks of anxiety and depression. Caregivers may suffer similar or even worse mental health outcomes compared with MDS patients.
Palliative care integration and next steps
There have been no randomized trials to systematically study PC interventions in patients with MDS or MPN.
We suspect that patients with MDS and MPNs and their caregivers would benefit from the development of interventions aimed at promoting serious illness conversations, addressing symptom burden, and alleviating psychological distress. At the end of life, one major area of improvement could be addressing transfusion dependence as an exclusion criterion for hospice care, which impedes many MDS and MPN (as well as leukemia and other HM) patients from engaging with and benefiting from these services. Research is needed to implement supportive and psychosocial interventions for these patients and families. Symptom assessment and consideration for palliative care referral should be pursued throughout the continuum of MDS and MPN care.
Treatment paradigms
Chimeric Antigen Receptor (CAR) T cell therapy is an exciting new treatment which is approved for specific hematologic malignancy management, specifically MM, B cell NHL, or ALL. CAR T cell therapy represents a largely unexplored area of palliative-hemato-oncologic care. Little is known about CAR T cell-associated PC needs. Anecdotally, however, these patients often have advanced disease, a generally poor prognosis, and significant symptoms while awaiting the receipt of CAR T cell therapy. Yet CAR-T therapy can be highly successful at achieving short-term and sometimes even long-term remissions. Manufacturing turnaround time and arduous cell collection requirements for CAR T cell therapy create significant delays and logistical issues in caring for these patients.
Symptom burden and palliative care needs
CAR T cell therapy carries a risk for cytokine release syndrome, neurotoxicity, and other physical symptoms from treatment including pain, fatigue, and anorexia, which may last months after treatment. Early studies of CAR T cell therapy show that these patients experience substantial healthcare utilization, especially at the end of life. One study reported that among decedents who had received CAR T cell therapy, most were hospitalized within 30 days of death, died in a hospital setting, and did not receive PC or hospice services. Furthermore, a recent study found that CAR T cell therapy patients report overly optimistic prognostic impressions and have high rates of psychological distress.
Palliative care integration and next steps
Evidence on the integration of PC interventions for patients receiving CAR-T therapy is currently lacking, perhaps owing to its novelty. Our clinical experience is that CAR T cell patients have similar health-related experiences to patients with acute leukemia in that they are undergoing a “high-risk, high-reward” treatment, which is associated with prolonged hospitalizations, iatrogenic symptoms, and the potential for psychological distress. Available, though limited, data suggest a need for psychosocial interventions to support patient coping. Future research integrating PC into CAR T therapy from treatment planning through survivorship or death could help mitigate the substantial burden of treatment toxicity, prognostic uncertainty, and prolonged hospitalization.
Treatment paradigms
Hematopoietic stem cell transplantation (HSCT) is an intensive and potentially curative treatment for many HM. Between 1957 and 2019, more than 1.5 million HSCT procedures were performed worldwide. HSCT requires preparative chemotherapy, which is typically delivered during an often prolonged and intensive index hospitalization. Many patients who undergo the procedure, especially those receiving allogeneic transplants, develop complications including graft-versus-host disease (GVHD). Autologous HSCT is less risky and poses no risk of GVHD, but still requires high-dose chemotherapy prior to stem cell rescue. Of the diseases specifically discussed in this article, HSCT is commonly performed for patients with AML, ALL, aggressive lymphomas, MM, or MDS. Autologous HSCT is mostly performed in MM and NHL. We have opted to discuss HSCT separately from these other diseases because these patients have considerable symptoms and specific PC needs. Furthermore, PC integration in stem cell transplant care is an active area of study.
Symptom burden and palliative care needs
Patients undergoing HSCT have unmet PC needs. HSCT is associated with low health-related QOL and high physical and psychological symptom burden. Physical symptoms are common and sometimes debilitating, perhaps comparable to those of patients with acute leukemias undergoing intensive high-dose chemotherapies. Psychologically, patients who undergo HSCT are highly likely to develop post-traumatic stress symptoms due to their treatment experience. The prolonged, socially isolating hospitalizations are associated with decreased patient-reported QOL, elevated levels of anxiety, and depressive symptoms including pronounced anhedonia. Thirty-seven percent (37%) of HSCT patients meet criteria for clinically significant depressive symptoms the week after transplant. Patients describe feeling trapped, fearful, discouraged, and powerless. Psychological stressors have been linked to higher risks of GVHD and decreased overall survival. Social isolation has only been intensified by the COVID-19 pandemic. We are just beginning to understand the effect of HSCT on patients’ families and caregivers.
Caregivers of HSCT recipients have prolonged and intensive caregiving burden, which has been shown to negatively impact QOL, physical well-being, and mood. Even prior to the procedure, caregivers experience immense anticipatory psychological distress. During HSCT, caregiver distress remains elevated as their loved ones experience treatment toxicities, physical and psychological symptoms, and the prolonged hospitalization and prognostic uncertainty.
Palliative care integration and next steps
PC integration into HSCT care is an active area of study. Several randomized clinical trials examined the feasibility and efficacy of PC-HSCT integration. A 2016 trial showed that specialty PC services improve psychological well-being and reduce symptom burdens during HSCT. Outcomes from the same HSCT cohort six months after transplantation showed longitudinal benefits of PC on QOL, physical symptoms, anxiety, depression, and post-traumatic stress symptoms. There was also an observed benefit in caregiver QOL and psychosocial outcomes, which prompted a subsequent unblinded, randomized trial conducted in 2020 examining a psychological intervention for caregivers of HSCT patients. A multisite randomized clinical trial of integrated specialist palliative care during the initial transplant hospitalization is ongoing (NCT# NCT03641378). PC integration at the index hospitalization for HSCT is beneficial and necessary. Future work is needed to improve psychological outcomes in patients who undergo HSCT and their caregivers. While recent small studies have examined various stress management interventions and treatment modalities, we must develop and implement PC interventions traversing the continuum of HSCT care which promote coping, improve QOL, reduce symptom burdens, and alleviate distress in HSCT patients and families.
Patients with HM undergo intensive and often chronic treatments. They experience prolonged hospitalizations, undergo invasive procedures, and endure toxicities with long-lasting physical and psychological impact. Patients with HM and their families are perhaps the most psychologically distressed of all patients with cancer. Despite the high burden of unmet palliative-related needs, patients with blood cancers are substantially less likely to access PC than are patients with solid tumors. The evidence is clear that the way forward includes standard-of-care PC integration into routine hematologic malignancy care to improve patient and caregiver outcomes, but this may not be required or helpful for all patients and situations. More research is needed to inform the highest need populations and the highest impact interventions. As the PC needs for patients with blood cancer vary significantly by disease, a disease-specific PC integration strategy is needed, allowing for serious illness care interventions to be individualized to the specific needs of each patient and situation.
As we have outlined throughout this article and summarized in Table , we are beginning to see the development of a robust evidence base for the integration of PC into standard practice AML, MM, and HSCT care. On the other hand, evidence has lagged in other hematologic malignancy conditions such as lymphoma, chronic leukemias, and MDS/MPN, despite high symptom burden, psychological distress, and poor QOL among these patients and their families. High-quality randomized clinical trials are needed for these specific patient populations to build upon the existing evidence and guide us forward in the care of these patients with serious illness.
Including Insonation in Undergraduate Medical School Curriculum
47ee1901-181b-4981-8f87-22218a7339df
6857521
Physiology[mh]
More than a century after the publication of the Flexner Report, curricular reform in medical schools remains a topic of much debate. Crafting a balance between rigorous scientific education and patient-centered care has been a major challenge for schools, and although there have been many reports and guidelines by national organizations, numerous curricular models are still employed in medical schools throughout the United States . One major challenge is the continuous re-appraisal of educational content and techniques in the face of ever-changing technology. Historically, ultrasound education has been delivered during medical school curricula using large scanners located in imaging laboratories within Radiology, Cardiology, Vascular or Obstetrics departments. With the advent of hand-held scanners it has become possible to scan patients in real time at the point of care, and many specialties such as Emergency Medicine, Surgery, and Critical Care, among others, have begun to embrace the technology and change their practices . Thus a technological innovation has enabled diverse new groups of users to perform focused sonographic assessment. The addition of ultrasound to the four important components of physical examination (i.e. inspection, palpation, percussion, auscultation) has been proposed and has been referred to as INSONATION . Point-of-care insonation is being viewed as a disruptive innovation, changing the paradigm of consultative imaging by specialists to one where imaging may be performed at the bedside by the clinicians directly responsible for the patient's care. A sample of the wide variety of applications for point-of-care insonation is presented in Table . Nearly every specialty has adopted some use of point-of-care ultrasound to aid diagnosis or guide procedures. While some procedures may be specialty-specific, such as transcranial Doppler or ultrasound-assisted brachytherapy, other indications span many specialties. Insonation may also be used by non-physicians, for example for ultrasound-guided venous access or bladder volume assessment. As clinical practice changes, residency program training requirements follow suit. For example, diagnostic and procedural ultrasound are part of the Model of the Clinical Practice of Emergency Medicine, and the Accreditation Council for Graduate Medical Education (ACGME) explicitly lists ultrasound among the training guidelines for Emergency Medicine, Obstetrics and Gynecology, Ophthalmology, Physical Medicine and Rehabilitation, Psychiatry and Neurology, Radiology and Urology . Therefore, it can be argued that insonation should be introduced in the medical school curriculum as a competency unto itself, in order to better prepare physicians for the skills required in their future careers. A study of first-year medical students instructed in insonation demonstrated that they were able to detect pathology in 75% of patients with known cardiac disease, whereas board-certified cardiologists using stethoscopes detected 49% . Similarly, internal medicine residents employing insonation were able to improve their diagnostic assessment of left ventricular function, valve disease, and left ventricular hypertrophy using ultrasound. Their assessments compared favorably to studies performed by level III echocardiographers, with average sensitivities of 93% and specificities of 99% for major pathology .
Insonation during physical examination by medical students and junior residents was found to increase diagnostic accuracy for systolic dysfunction when compared to history and physical examination . The report of the AAMC (Association of American Medical Colleges) and the Howard Hughes Medical Institute (HHMI) committee, Scientific Foundations for Future Physicians, stated, “the medical school curriculum should be integrated across disciplines and repeatedly emphasize the importance and relevance of the sciences to medicine” . Understanding of the basic principles of ultrasound is listed in that report as a specific example of a competency to be expected of students entering medical school. In broader terms, ultrasound can layer an understanding of basic scientific principles (anatomy, physiology, etc.) onto clinical scenarios throughout medical training. This layered curriculum has already been implemented in many schools, with integration of ultrasound through pre-clinical and clinical courses. It can help bridge the gap between basic science and clinical care by demonstrating, in real time, the living anatomy and physiology described in books. Ultrasound can be used as a “virtual scalpel” to demonstrate anatomy on live subjects without harm. This allows a three-dimensional assessment of anatomic structures and the ability to visualize moving structures (such as the heart) in ways not possible with traditional cadaver dissection. Students learning physical examination can use ultrasound to enhance their understanding of surface anatomy, organ size and location. Several factors make insonation integration in the curriculum challenging. First, access to hand-held machines is often limited. At most institutions, the larger ultrasound equipment is in continuous clinical use and taking the machines out of the clinical environment can be difficult. In larger medical centers there are enough machines that taking a small number away for a short educational session is feasible. Other institutions have dedicated training machines in use in educational areas or simulation centers. Others have a relationship with industry allowing for educational use of ultrasound equipment for training and research purposes. When large numbers of machines are available it is possible to hold large sessions for many medical students simultaneously. Where these options are not feasible, it may be possible to use machines off hours when they are not being used for clinical cases. Smaller sessions with fewer students would also limit the number of machines needed for each session. Finding time in the curriculum can be difficult, especially when insonation is viewed as an extra skill to add in. Many programs have set up pilot projects in which a set amount of time is dedicated to insonation training and practice, or a portion of the existing curriculum is adapted to incorporate insonation as an adjunctive teaching modality. In more progressive and comprehensive curricula, insonation has been incorporated at the highest level of curricular planning, so that its incorporation at multiple levels of medical education has been thought out at a global level. There is often variability in faculty's comfort level in teaching insonation, and therefore finding instructors and mentors for the student experience is difficult.
While there may be some faculty with a range of ultrasound knowledge sufficient to cover the curriculum requirements, it may be necessary to incorporate faculty development into the curriculum as well. Faculty development in skills as well as teaching styles and learning objectives is an important component of any course, but it is especially important in education given the wide range of comfort levels in using and instructing others in the technology. Finally, there are some institutions where political pressure may form obstacles to the medical student curriculum in insonation. If some departments feel ownership over the technology there may be reluctance to have a multi-specialty approach towards teaching the curriculum, or the content may be skewed towards a specialty-specific rather than global approach. Educators may even encounter opposition to the concept of an insonation curriculum for medical students if physicians take the stance that insonation is a technology which is not meant for use by many specialties and only graduate level specialized training is appropriate. This political issue is complex, and solutions include open multidisciplinary discussions at the curriculum committee and Dean’s office level to ensure a fair and balanced approach to undergraduate medical education. It is often helpful as well to invoke analogous curricular components which have been used successfully for years. For example, it is quite standard to teach a medical student destined to become a psychiatrist how to use an ophthalmoscope, and an understanding of anatomy on CT scans is commonly expected of Gross Anatomy students. These practices generally meet little resistance from (and pose no threat to) ophthalmologists and radiologists. Despite major curriculum changes to the preclinical years at many medical schools, the general focus of the first year remains building an understanding of normal anatomy and physiology. To this end Gross Anatomy remains a mainstay of the curriculum, and many different modalities have been described to enhance students’ educational experience. The term “living anatomy” was coined in 1986 as a means of describing anatomy assessments of joint movement, organ examination, etc. by medical students on themselves and each other . Since then the term has been applied to ultrasound assessments of live anatomy as well. The first description of a Gross Anatomy curriculum which included ultrasound was in 1996 in Hannover Medical School in Germany. Since then many other institutions have incorporated this technique , and insonation can become a disruptive training strategy. In addition, several institutions in the United States have crafted longitudinal curricula in insonation which begin in the first year and progress throughout the duration of the medical curriculum. In 2006, Wayne State University School of Medicine began a longitudinal curriculum which began with insonation for normal anatomy and basic principles of insonation for procedural skills . The curriculum included hands-on sessions, didactics, clinical correlates, multimedia computer-based content, and faculty mentoring. In the same year the integrated ultrasound curriculum (dubbed iUSC) at the University of South Carolina School of Medicine was begun . The first-year component of this four-year curriculum consisted of ultrasound laboratory sessions and web-based learning (modules online at www.susme.org ) as part of the Gross Anatomy curriculum. 
In addition, the physiology course was augmented by insonation for cardiovascular hemodynamics (cardiac and vascular ultrasound including Doppler assessment). On the other hand, a multi-modal approach was also used to integrate ultrasound into the first-year medical curriculum at University of California, Irvine . Web-based lectures modules (deployed via iTunes at the UCIMC Ultrasound Education channel), peer instruction and standardized formative evaluations were used to teach students skills in insonation with image acquisition and interpretation. At the Icahn School of Medicine at Mount Sinai, sessions on insonation for cardiac and vascular systems, and common bedside procedures have been incorporated into the Gross Anatomy Course since 2006 . In addition, the physical examination course is augmented by insonation sessions for the assessment of the heart, gallbladder, aorta, and thorax . Online tutorials and videos (available at www.SinaiEM.us ) were incorporated into the course structure, and dedicated faculty development sessions were used to enhance faculty skills. Students at the Ohio State University College of Medicine assess ultrasound anatomy as an adjunct to each body system covered by the Gross Anatomy curriculum. Students may take part in supplemental coursework highlighting ultrasound assessment of hypotension as well . At most schools the second year curriculum focuses on pathology and increasing comfort with physical examination techniques. Several institutions have integrated insonation into this portion of the learning experience as well. At the University of South Carolina School of Medicine ultrasound is incorporated into the physical diagnosis, problem-based learning, and pathophysiology courses. Insonation of organ systems augments physical examination skills, and case-based scenarios require students to incorporate multi-system assessments such as a hypotension evaluation . The Ohio State program continues with a two-week hands-on clinical skill review course, including sessions on ultrasound-guided procedures. An additional elective is available which covers organ assessments in more depth . At Wayne State and the Icahn School of Medicine at Mount Sinai, insonation is incorporated into the physical examination course . Standardized patients are scanned, and image-based case scenarios build on the concepts taught during the physical examination course. In most schools the third and fourth years are devoted to clinical rotations, and the bulk of students’ time is spent in patient care activities. The curriculum assumes increased comfort with normal and abnormal anatomy, and an increased focus on clinical decision-making. Many medical students encounter ultrasound use in their specialty rotations, and dedicated electives focusing on ultrasound have existed within Radiology, Maternal/Fetal Medicine, Cardiology, and Emergency Medicine for years. Five out of six required third-year clerkships at the University of South Carolina School of Medicine incorporate insonation instruction and an insonation-specific competency assessment . During the fourth year, electives in Emergency Medicine insonation, Radiology, and a two-day insonation “Capstone” course are available to interested students. At Ohio State students complete an adjunct course in insonation image interpretation and indications as part of their core clinical rotations in third year . In addition, fourth-year students undergo more advanced insonation training during their required Emergency Medicine rotation. 
Interested students may also undertake an advanced ultrasound elective which includes didactic and hands-on sessions, journal club, and participation in Emergency Department or Intensive Care Unit Insonation rounds . The University of California, Irvine established the first fourth-year medical student emergency insonation elective in 2002 . Each two- or four-week rotation includes didactic sessions, image interpretation review, and performance of bedside insonation in the Emergency Department. This format has become common to most Emergency Department-based insonation rotations for students, and the number of programs offering Emergency insonation rotations has grown dramatically in the last decade. Most of the nearly 90 fellowship programs in Emergency insonation offer rotations for medical students. Based on the experiences of many medical schools, a sample longitudinal curriculum is included below (Table ). Educators may consider more or fewer of these components based on resources. Since the 1990s there has been a rapid proliferation of online learning modules used in medical education. A recent meta-analysis of over 200 published studies on internet-based instruction found a positive effect for educational outcomes such as knowledge, skills, behaviors and effects on patient care . Given the tight time constraints on medical school curricula in general, many educators have sought to augment traditional classroom time with educational materials which students can access on their own schedule. Given the large amount of content knowledge necessary in insonation, it is not surprising that all medical schools currently training students in insonation incorporate internet-based content delivery to some extent. The variety of multimedia options used by the different programs reflects the varied needs of each group of students and technology options available. A combination of online tutorials, interactive assessment tools, cases, image archives, and other modalities have been used. The science of asynchronous learning is evolving as adult education theory itself evolves. A recent study investigated which components of internet-based learning were most effective and found interactivity, practice exercises, repetition, and feedback improve learning outcomes . Learner satisfaction was improved with interactivity, online discussion, and audio. In addition to the online modules previously listed, other examples of using interactive online technologies in ultrasound for medical education include the sonographic digital portfolio of saved cases at Ohio State , and the use of social media. At Ohio State, ultrasound educators used Twitter to deploy a curriculum of high-yield insonation concepts via “push technology” to followers on the account . Incorporation of insonation in medical education is growing rapidly. In 2011, the University of South Carolina hosted the First World Congress on Ultrasound in Medical Education . Educators and learners from across the globe gathered here to present evidence for educational strategies, collaborate, and engage in hands-on education sessions. Other efforts to increase interaction between learners and teachers include the peer mentoring program and The Ultrasound Challenge at Ohio State . The disruptive innovation of portable ultrasound and hence insonation has led to a revolution in medical imaging, empowering nontraditional sonographers outside of Radiology, Cardiology, or Obstetrics to help their patients in real time. 
As use spreads throughout most medical specialties, machine costs decrease, and more specialty societies create guidelines and training pathways in ultrasound use, medical students will be increasingly exposed to this technology throughout their careers. Moreover, they will be increasingly expected to understand its use when they graduate. We may soon expect many types of clinicians to be imagers as well. While traditional imaging specialties will still retain a high level of proficiency and have access to the most advanced imaging equipment, increasingly other clinicians will use focused examinations to gain immediate and management-changing information about their patients. Thus the novelty of medical student education in sonography will wane, replaced instead by ubiquity.
Myofibroblastoma in the Liver: A Case Report and Review of Literature
0e6e18c0-3c93-4bd7-90f0-2aa40c98fc05
10616985
Anatomy[mh]
Myofibroblastoma is a rare benign spindle cell tumor with fibroblastic and myofibroblastic differentiation, and usually occurs in the breast. Most commonly, it presents as a slow-growing painless breast mass, which, on imaging, appears as a well-circumscribed, lobulated mass. Infrequently, it occurs outside of the breast along the embryonic milk line and is then also known as mammary-type myofibroblastoma. Extramammary (mammary-type) myofibroblastoma was first described in 2001 in a series of 9 cases. The most common locations are the inguinal region, perineum, scrotum, and vulva. Myofibroblastoma is exceedingly rare in the liver, and currently, only one such case has been reported in the English literature. When occurring in the liver, myofibroblastoma may pose diagnostic challenges due to its non-specific clinical and radiological features. On histology, it overlaps with other bland spindle cell lesions. Herein we describe a case of myofibroblastoma in the liver of a 52-year-old woman, with detailed histological and immunohistochemical analyses. A 52-year-old woman presented with symptoms of urinary tract infection and subsequently underwent computed tomography (CT) urogram examination to rule out anatomic abnormalities. Incidentally, she was found to have a 5.4 cm lesion in the right lobe of the liver . Subsequent magnetic resonance imaging (MRI) of the abdomen demonstrated a mass with avid uniform arterial phase enhancement without washout on the delayed portal venous phase. The gallbladder, bile ducts, adjacent hepatic parenchyma, and hepatic vasculature were unremarkable. The overall imaging findings were indeterminate, as imaging features were not characteristic of benign hepatocellular lesions (eg, focal nodular hyperplasia or adenoma). Percutaneous biopsy was thus performed under ultrasound imaging guidance. Histology demonstrated a hypocellular spindle cell neoplasm with bland, oval to spindle cells embedded in hyalinized collagen bundles . There were no mitoses or necrosis. A broad panel of immunohistochemical stains showed a unique combination of positive estrogen receptor (ER), CD34 (focal), and muscle markers (SMA and caldesmon) consistent with myofibroblastoma. She subsequently underwent partial hepatectomy. Gross examination of the liver revealed a solitary, white-tan, firm, well-circumscribed, homogeneous mass measuring 6.2 × 4.9 × 3.8 cm. The cut surface of the mass was smooth, firm, and whorled, with a focal area of hemorrhage consistent with the prior biopsy site . The mass was 0.1 cm away from the inked resection margin. Hematoxylin and eosin-stained (H&E) sections from the liver tumor demonstrated a distinct, well-demarcated spindle cell proliferation with uniform oval to spindle cells arranged haphazardly and in short fascicles in a background of collagenous stroma , similar to the prior biopsy. High-risk features such as cytological atypia, mitotic activity, or necrosis were absent. The hepatic parenchymal resection margin was negative for tumor, and the background liver parenchyma did not show any significant inflammation, fibrosis, or steatosis. Additional immunohistochemical stains were performed on the resected tumor. The tumor cells were positive for progesterone receptor (PR) and androgen receptor (AR). The tumor cells were negative for KIT, DOG1, desmin, EMA, S100, and STAT6. The immunohistochemical staining for RB1 was partially lost . A fluorescence in situ hybridization (FISH) assay using a Vysis single-color probe kit was performed to detect 13q14 ( RB1 ) locus alteration.
Among 151 tumor cells, there was a monoploid RB1 signal in 49% of cells, consistent with monosomy or heterozygous deletion, and complete loss of the RB1 signal in 41% of tumor cells, indicative of homozygous deletion . These findings confirmed a diagnosis of myofibroblastoma. The patient was well nine months after surgery, except for minor postsurgical discomfort. There was no residual or recurrent disease. The myofibroblast is a modulated fibroblast with the ability to contract and hence functionally resembles smooth muscle. In 1987, Wargotz et al coined the term myofibroblastoma for a distinct benign stromal tumor of the breast in which the tumor cells showed immunophenotypic and ultrastructural features of myofibroblasts. It usually affects older men and postmenopausal women. A similar mesenchymal tumor resembling mammary myofibroblastoma that occurs at a location other than the breast is known as extramammary or mammary-type myofibroblastoma. These tumors usually occur along the milk line and primarily affect adults ; however, occasional cases have been reported in children. The histogenesis of extramammary myofibroblastoma remains enigmatic. The proposed theory is that these tumors originate from tissue along the embryonic milk line that extends from the axilla to the medial groin. The alternative theory proposed for lesions outside the embryonic milk line is the presence of ectopic breast tissue. In the largest series of 143 extramammary myofibroblastomas, the tumors were distributed in the inguinal region (45%), chest wall/axilla (5%), trunk (12%), lower extremity (13%), upper extremity (1%), and intraabdominal/retroperitoneal organs (10%). The exact breakdown of intraabdominal tumors with respect to visceral involvement was unavailable. The mean age in the series was 54 years, with a male predominance (66%) and a mean tumor size of 6.6 cm. Most patients were asymptomatic, with a few presenting with a painful mass. Other reported unusual locations outside of the milk line included the popliteal fossa, liver, orbit, , head and neck, thigh, buttock, prostate, paratesticular region, meninges, seminal vesicle, scrotum, and perianal area. Wei et al described a collision tumor in the vulva composed of extramammary myofibroblastoma and eccrine adenocarcinoma. Most patients had a good prognosis despite positive surgical margins, except for one case that recurred 20 years later. Among the 160 cases of myofibroblastoma reported in the literature in various locations, there was only one instance in the liver. Radiologic features of these rare tumors are not well characterized; nonetheless, MRI typically shows a well-circumscribed mass with an intralesional fat component. Histologically, myofibroblastomas are well-circumscribed mesenchymal tumors composed of bland spindle cells arranged in a fascicular pattern in a background of collagenous stroma. , Individual cells have short elongated nuclei and eosinophilic cytoplasm with indistinct cell borders. The tumor may have a varied proportion of intralesional fat. Less common morphologic features such as nuclear atypia, epithelioid tumor cell morphology, and neurilemmoma-type nuclear palisading have also been described. Recognizing this histological diversity, the World Health Organization (WHO) classification of breast tumors describes five variants, ie, lipomatous, myxoid, fibrous/collagenized, epithelioid/deciduoid, and palisading/Schwannian-like myofibroblastomas. Our case was fibrous predominant, with no intralesional adipocyte component.
Immunohistochemically and ultrastructurally, myofibroblastomas prove to be of smooth muscle differentiation. Lee et al reported that these lesions are diffusely positive for desmin, compared to focal positivity as originally reported by Wargotz et al. , They are variably positive for CD34, AR (androgen receptor), and ER. Actin positivity may be seen in one-third of the tumors. , Howitt et al found CD34 positivity in 89% and desmin positivity in 91% of the tumors; however, both markers were negative in 4 cases (3%). In addition, RB1 expression was lost in 92% of tumors. In our case, the tumor was positive for SMA, caldesmon and ER, focally positive for CD34, and negative for desmin, with partial loss of RB1 expression. Pathologists must be aware of the variable expression of immunohistochemical markers in order to arrive at a correct diagnosis. As shown in our case, RB1 FISH assay is available to confirm the diagnosis if the immunohistochemical staining pattern is equivocal or shows partial loss of expression. The differential diagnosis includes benign tumors such as spindle cell lipoma, cellular angiofibroma, angiomyofibroblastoma, solitary fibrous tumor, perineurioma, and malignant lesions such as gastrointestinal stromal tumor (GIST) and low-grade malignant nerve sheath tumor. Spindle cell lipoma is a well-circumscribed, thinly encapsulated, lobulated mass common in the head and neck area. This lesion is composed of an admixture of mature adipocytes and undifferentiated spindle cells with short stubby nuclei and pale, poorly defined cytoplasm with rare mitoses. They are set in a variably fibromyxoid stroma notable for the presence of brightly eosinophilic, hyaline collagen fibers and numerous mast cells. Myofibroblastoma contains less fat and more prominent collagen-rich stroma, and it is usually positive for desmin compared to spindle cell lipoma (<2% cases are positive). Both are genetically related neoplasms and share partial loss of genetic material from the chromosome 13q14 region. In the inguinal region, cellular angiofibromas and angiomyofibroblastomas also enter into the differential diagnoses for extramammary myofibroblastoma. Angiomyofibroblastoma, more commonly seen in females, is well-circumscribed with prominent small vessels and desmin-positive perivascular cells. In postmenopausal women, they show hyalinized stroma and a more spindled appearance resembling myofibroblastoma. Cellular angiofibromas in females are small and circumscribed, while in males these tumors are larger and less circumscribed. Histologically, they exhibit relatively uniform spindle cells with scant eosinophilic cytoplasm, indistinct cell border, and short, blunt to wavy nuclei arranged in short fascicles associated with wispy collagen bundles. The striking hyalinized vessels are distinct for these lesions and do not occur in myofibroblastoma. Immunohistochemically cellular angiofibromas are positive for CD34 but commonly negative for desmin and SMA, which helps to differentiate from myofibroblastoma. The tumor suppressor gene RB1 is located on chromosome 13q14, and deletion of this region has been reported in genetically related tumors, including spindle cell lipoma, myofibroblastoma, and cellular angiofibromas. , – Magro et al suggested the term “benign stromal/mesenchymal tumors with 13q14 deletion” as a unifying concept, as all three share the same genetic alteration and a subset of spindle cell lipomas and cellular angiofibroma showed histologic similarity to myofibroblastoma. 
Solitary fibrous tumor (SFT) is another differential diagnosis to be considered when dealing with a bland spindle cell lesion in the liver. It comprises short, stubby spindle cells arranged in a patternless pattern with intervening hyalinized collagen, a characteristic branching hemangiopericytoma-like vascular pattern, and perivascular hyalinization. SFTs occurring outside of the pleura rarely exhibit mitoses and necrosis. The tumor cells are typically positive for CD34 and STAT6 and negative for S100, actin, desmin, and keratin. A subset of tumors shows nuclear positivity for beta-catenin, and positivity for desmin is infrequent and focal. Gastrointestinal stromal tumor (GIST) is a common mesenchymal tumor of the gastrointestinal tract which may involve the liver. It may contain a variable amount of collagen-like stroma; however, GIST is typically positive for KIT and DOG1. Malignant peripheral nerve sheath tumor (MPNST) consists of spindle cells with tapering or buckled nuclei and shows a perivascular whorling appearance. Desmin is usually negative in MPNSTs except for those with rhabdomyogenic differentiation (Triton tumor), and S100 is positive in 50% of MPNSTs. Loss of H3K27me3 is a typical finding in MPNST which can help establish the diagnosis. In summary, myofibroblastoma in the liver is a rare tumor composed of cytologically bland spindle cells in a hyalinized collagenous stroma. The spindle cells show variable expression of CD34 and desmin. Usually, they are positive for SMA and ER, in addition to loss of RB1 as demonstrated by immunohistochemistry and FISH. Given its histological heterogeneity and characteristic immunohistochemical profile, awareness of this uncommon entity is essential to arrive at the correct diagnosis, especially in small biopsies.
The First Steps of Industrial Hygiene and Occupational Toxicology at the Clinica del Lavoro of Milan Under the Guidance of Luigi Devoto
9039a1d5-35cf-42f1-a780-8fc2fe292828
7809941
Preventive Medicine[mh]
The industrial development that characterized the city of Milan and the whole of Lombardy in the nineteenth century received a major boost after the Unification of Italy and the subsequent introduction of electric power into Lombard cities. In 1883 the “Comitato promotore per le applicazioni dell'elettricità” – later the “Società generale italiana di elettricità-Sistema Edison” – under the direction of the engineer Giuseppe Colombo (1836-1921) of the Politecnico di Milano, inaugurated in the Lombard capital, in via Santa Radegonda near the Duomo, the first power station in continental Europe, second in the world only to that of New York in 1882 . Numerous other power stations were subsequently opened in Lombard cities along several watercourses, chief among them the Adda. The availability of energy to power machinery favoured the growth of industrial activity; the industries that developed most were textiles, metalworking and printing. In the same years imposing engineering works were carried out, such as the Alpine tunnels for the railway connections between Italy and the neighbouring countries of France and Switzerland. These activities, together with the rapid industrial development, drew in a workforce that until then had been employed mainly in agriculture. The lack of training, of a culture of prevention and of an adequate legislative framework protecting workers' health and safety caused numerous victims, both as occupational injuries and as occupational diseases. By way of example, the works for the Gotthard tunnel, which connects Italy and Switzerland and was inaugurated in 1882, caused about 200 deaths from injuries due to collapses and explosions in the tunnel and from the spread of an infectious disease caused by Ancylostoma duodenale, a parasite that reproduced and thrived in the hot, humid climate of the galleries and in the poor hygienic conditions of the work environment . Between the end of the nineteenth century and the beginning of the twentieth, Milan became the main economic centre of the Italian peninsula, one of the vertices of the industrial triangle together with Turin and Genoa. At the same time, Milan hosted some of the first initiatives for the protection of workers' health. For example, in 1884 the “Cassa Nazionale di Assicurazione per gli Infortuni degli operai sul Lavoro” (CNAIL), the original nucleus of INAIL, opened its offices in that city . Later, in 1906, on the occasion of the International Exposition held in Milan to coincide with the inauguration of the Simplon tunnel, the first “International Congress on Occupational Diseases” was organized, during which the Permanent International Commission on Occupational Diseases, the original nucleus of ICOH, was founded. In this particular sociocultural and economic context the Clinica del Lavoro was established in Milan, the first institute in the world dedicated to the research, diagnosis and treatment of occupational diseases, whose direction was entrusted to the university clinician Luigi Devoto, the main promoter of its creation. Luigi Devoto was born in Borzonasca, near Chiavari, in 1864. He graduated in medicine in Genoa in 1888, beginning his university career as an assistant at the Institute of Pathological Anatomy in the Ligurian capital.
After a brief period of research in Prague, he returned to Genoa as an assistant at the Institute of Clinical Medicine, headed by Edoardo Maragliano (1849-1940) . In those years he carried out studies on tuberculosis in healthcare workers, which first drew him to the protection of workers' health . Qualified as a lecturer (libero docente) in Medical Pathology in 1897, two years later he obtained the chair at the University of Pavia. Interested in studying the effects of social inequalities on health, a theme inherited from his mentor Maragliano, once in Pavia Devoto began to devote himself to the diseases of the various classes of workers. From 1901 he took part in the work of the Royal Commission for the study of occupational diseases, promoted by the then minister of agriculture, industry and commerce, the physician Guido Baccelli (1830-1916) . His interest in social pathologies led him to organize in his university a monographic course on occupational diseases, in which he showed how certain work activities, combined with wretched living conditions, contributed to the onset of specific diseases. To explore this theme further, in 1901 he founded, together with his pupil Carlo Moreschi (1876-1921), a scientific journal entitled “Il lavoro: rivista di fisiologia, clinica, igiene e assistenza sociale del lavoro” . In 1925 the journal took the name “La Medicina del Lavoro” and has continued publishing to this day, surviving even the Second World War and becoming the oldest journal on the subject still in existence. In the early 1900s his commitment to improving the living and working conditions of the working classes led him to give several cycles of lectures in which he argued that it was not the individual worker but work itself that was sick, and that work had to be cured in order to guarantee better health for the worker. Devoto regarded science as an instrument for advancing knowledge and solving social problems. In particular, in a lecture given in Brescia in December 1901, Devoto declared that “[…] work must be purified of its thorns and its stains. The enlightened aid of the physiology and pathology of work is indispensable. One must have faith in science. And with the light of science we will arrive at a code, an emanation of the laws of biology and physiology, which will constitute a pact of alliance between governments, industrialists and workers” . In 1908 Devoto moved to Milan, to the Istituti Clinici di Perfezionamento, to direct the construction of the nascent Clinic for occupational diseases . His activity in Milan was interrupted by the First World War, during which the Genoese physician was chosen as consultant to the III Army Corps, devoting himself to the diseases of the Italian troops and promoting, together with his collaborators, research and studies to reduce soldiers' mortality and morbidity . At the end of the war, in 1922 he was appointed a full member of the Regio Istituto Lombardo di Scienze e Lettere and, in 1924, he was called to hold the first chair of occupational medicine of the newly founded Università degli Studi di Milano.
President of the “Società italiana di idroclimatologia, talassografia e terapia fisica” (1920-1931), in 1929 he was also elected president of the newly founded “Società Italiana di Medicina del Lavoro”, a position he held until his death . In 1934 he was appointed, on the proposal of Benito Mussolini (1883-1945), senator of the Kingdom for his scientific, social and organizational merits. His relations with the Fascist regime were characterized by mutual esteem and respect . Besides dedicating the convalescent home of Salice to the “Carta del Lavoro” (1931), on numerous occasions Devoto expressed his approval of the measures taken by Mussolini's government in the fields of compulsory insurance for occupational diseases and industrial hygiene. Devoto died in Milan in 1936, one year after his retirement from the Clinica del Lavoro, the institute he had founded and directed for twenty-five years . At its session of 20 November 1902, the city council of Milan, chaired by the mayor Giuseppe Mussi (1836-1904) and including enlightened socialist figures, among them Luigi Mangiagalli (1850-1928), future director of the Obstetric-Gynaecological Clinic that today bears his name, resolved to establish a Clinic for Occupational Diseases. It was decided that the construction costs would be borne by the Municipality itself, which identified for the building a plot of land it owned, close to the nascent Obstetric-Gynaecological Clinic. As stated on a plaque signed by mayor Mussi and placed on the façade of the Clinic, later destroyed in the bombings of 1943, the aims of the clinic were: “to study scientifically the causes of occupational diseases, spreading clinical knowledge of them among physicians; to admit, for diagnostic and therapeutic purposes, workers with suspected, incipient or advanced occupational diseases; to check periodically the state of health of workers employed in industry in general and in unhealthy work in particular” . The Clinic was thus intended to train postgraduate clinical physicians through the care of the sick and through scientific experimentation for social purposes, relying on first-rate laboratories that allowed the application of the experimental method as the supreme guarantee of its work. Attendance at the Clinic led to a postgraduate specialization diploma. In 1905 the Italian Parliament, with law 365 of 9 July, approved the constitution of the Milanese Istituti Clinici di Perfezionamento, including the “Clinic for occupational diseases”, whose foundation stone was laid in December 1907. Specialist outpatient activities began in 1908, under the responsibility of Luigi Carozzi (1875-1963), helped by the assistant Mario Ciovini, using premises made available first by the Poliambulanza in via Arena and then by the Obstetric Clinic. On 20 March 1910 the Clinic for Occupational Diseases was inaugurated . The new institute presented itself as a transregional and international care facility: its regulations provided that it would admit patients from every region: “the clinical territory is the same as that of pathogenic work. The very fact that the international Commission on occupational diseases has its seat in the clinic indicates that the institution admits no boundaries and indeed intends to be at the centre of broad international exchanges.”
The Clinic for Occupational Diseases was part of a network of healthcare facilities owned by the municipality of Milan, called the Istituti Clinici di Perfezionamento. Detailed information on the construction and organization of the Clinic can be found in the volume published by the Istituti Clinici di Perfezionamento on the activities carried out from 1906 to 1911 , as well as in the report prepared by Devoto on the Clinic's first two years of activity . The original plan provided for a building with three levels: a basement, a raised ground floor and a first floor; following a donation from the association of textile industrialists, the initial project was enlarged with the construction of a second floor. At its inauguration the building had four floors and a covered area of 1300 m², housing inpatient wards distributed over three floors, each comprising two adjacent rooms of 10 beds each with shared sanitary facilities . There were also two small rooms that could accommodate single patients, with separate sanitary facilities. Each ward had an adjacent room called the cabinet for first analyses, where physicians performed simple biochemical tests, such as blood cell counts . At the time this was standard practice for such tests, since clinical biochemistry was a discipline still taking its first steps. The building housed four laboratories: clinical physics, with the röntgen (X-ray) cabinet, in the basement; chemistry, on the first floor; bacteriology, diagnostics and experimental therapy; and histology and clinical microscopy, on the second floor. The building also included a library and a main lecture hall . In 1911 the staff on duty at the clinic consisted of a director (Luigi Devoto); two deputies (Domenico Cesa-Bianchi and Luigi Carozzi); two heads of service (Luigi Preti and Carlo Bezzola); a bromatologist (Nestore Monti); three assistants (Mario Ciovini, Pietro Boveri and Felice Perussia); four trainees (Carlo Vallardi, Benedetto Agazzi, Luigi Pollini, Luigi Bellazzi); and one attendant (Giuseppe Milanense). The trainees received no salary until, after a year of diligent attendance, they could be promoted to assistants. Under this organization the laboratories were entrusted to the heads of service. The clinical wards were allocated at the beginning of each year among the physicians, whether or not they were laboratory heads. The purpose of the laboratories was to provide the data needed to establish the aetiology of many morbid conditions and the diagnosis of diseases. They were fully equipped with the most modern scientific apparatus, purchased through a donation of 80,000 lire received from the Cassa di Risparmio delle provincie lombarde; for the supply of chemical reagents a donation had been received from the Società Carlo Erba. The chemistry laboratory faced the entrance of the building and had numerous windows that provided excellent natural lighting ( A). It consisted of six rooms, the largest of which was fitted with three large four-place work benches fully equipped with gas, water and a reagent rack at each place. Altogether these could accommodate 12 people. Fume hoods had been installed at three of the windows, one of them adapted as a steam-heated water bath. Each hood was supplied with gas, water, steam and electricity.
Shelves for reagents were placed along the walls. A side table served for the ordinary balances and as a store for filter paper. The laboratory was completed by smaller rooms intended for special analytical examinations, physico-chemical investigations, distillations, balances, machines and thermostats; there were also a darkroom and service rooms for glassware. The small entrance hall contained two cloakrooms of 10 places each for the trainees' personal belongings. The purposes of the chemistry laboratory were to: (1) examine the raw materials handled by workers and follow their subsequent modifications and transformations; (2) assess the effects of the toxic substances present in the work environment on the functions of the human organism; (3) investigate the hygienic conditions of the worker – to know what he eats and drinks and to reveal the normal and abnormal components of the air he breathes, so as to give a rational basis to many conditions that would otherwise remain inexplicable; and (4) explore, in collaboration with related branches such as physiological and pathological chemistry, the complex field of the functions of the organism, hindered or arrested, made exuberant or deficient by all the disturbing causes arising from work and industry. In the Clinic's early years Devoto had appointed the physician Luigi Preti as head of the laboratory, assisted by the Pavia bromatologist Nestore Monti (1878-1947) . Monti had graduated in Chemistry and Pharmacy in Pavia in 1902 and in 1911 obtained the libera docenza in bromatological chemistry. Director of the Provincial Laboratory of Pavia, Monti distinguished himself with studies of great modernity for the time in the fields of bromatology and toxicology and, in particular, collaborated with Devoto on the relationship between nutrition and health in the different social classes. The director of the Clinica del Lavoro, in fact, published from 1903 to 1926 several scientific contributions and dissertations in which he associated the onset of occupational diseases with malnutrition and with unhealthy habits of life . In addition to the chemistry laboratory, the Clinic housed a clinical physics laboratory, located in the basement and divided into three sections: the röntgen cabinet for X-ray radiography ( B), the recording (graphics) room, and the room for the analysis of gases and respiratory exchange. The röntgen cabinet, directed by Felice Perussia (1885-1959), freshly returned from specialist training in radiology in Vienna and Hamburg, was equipped with the most modern and precise apparatus, allowing every kind of work useful for the purposes of semeiotics and diagnosis, as well as for therapy. Modern and powerful electric generators powered this equipment. The recording room housed kymographs, ergographs and the most advanced instruments in the field: cardiographs, arteriographs, sphygmographs, tonosphygmographs, clonographs, pneumographs, dynamographs, chronometers, reflexometers, devices for recording movements and tremors, muscle explorers, algesimeters and aesthesiometers.
The gas-analysis room contained a Zuntz apparatus for determining the gases of respiratory exchange, together with absorption pipettes for solid and liquid substances, explosion and combustion burettes, Wolff tubes, mercury pumps, a carbacidometer, an azotometer, a barometer, hygrometers, psychrometers, thermometers, spirometers and portable devices for determining gases in an environment – in short, numerous instruments useful for industrial hygiene surveys. Besides these two laboratories, which can clearly be linked to toxicology and industrial hygiene, it should be recalled that the laboratory of bacteriology-diagnostics and experimental therapy ( C and D) was equipped with an animal facility able to house small and large animals, which could be used to test the toxicity of chemical substances. The main information on the research carried out in the Clinic's first decades is provided by Luigi Devoto himself who, on the occasion of his retirement in 1935, summarized the activities of the Milanese institute's first twenty-five years . Many diseases caused by chemical agents were studied in that period; among poisonings the most frequent were lead poisoning (saturnism), mercurialism, phosphorus poisoning, benzene poisoning, aniline poisoning and carbon disulphide poisoning; lung diseases caused by dusts were also widely studied. In Devoto's description of the various lines of research developed in those years it is possible to identify several references to activities attributable to the chemistry laboratory. With regard to saturnism, Devoto cited the studies of Ugo Bassi, who had investigated toxic lead arachnoiditis, demonstrating the presence of lead in the tissues . Concerning occupational mercury poisoning, he cited the contribution of Giovanni Battista Lo Faso, chemist and head of the chemistry laboratory, who developed an easy and reliable method for detecting mercury in urine . The method, of high sensitivity, relied on spectroscopic analysis: “Lo Faso uses a small coil of copper wire, places the washed and dried coil in a Geissler-Plücker tube modified by him, creates a vacuum with a water pump and illuminates the tube by means of the secondary discharge of another Ruhmkorff coil” . Devoto regarded the dusts present in workplaces as always harmful, since they could gradually cause alterations of various kinds in the respiratory system. He also stressed that, even where dusts were not capable of causing damage on their own, they could act as cooperating or associated factors with bacterial germs, or prepare the ground for them to develop. In this field the chemist Lo Faso studied, in collaboration with Boattini, the pathogenesis of pulmonary anthracosis , while Luigi Pollini demonstrated the presence of carbon in the lungs of children . The chemistry laboratory also developed biochemical assays to support diagnostic activities. In particular, Lo Faso devoted himself to the study of the biuret reaction for the determination of serum proteins . The reaction was based on the presence of divalent copper ions which, in an alkaline environment, formed a violet-coloured complex that was measured with a spectrophotometer at 520-550 nm.
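As a purely illustrative aside (not drawn from Devoto's account), the quantitative principle behind a colorimetric assay of this kind is the Beer–Lambert law; the symbols and numerical values below are assumptions chosen only to show the arithmetic, not data from the original work:

\[ A = \varepsilon\, l\, c = k\, c \quad\Longrightarrow\quad c = \frac{A}{k} \]

so that, for a hypothetical calibration slope k = 0.05 L g⁻¹ obtained from protein standards in a 1 cm cuvette and a measured absorbance A = 0.30 at 540 nm, the diluted sample would contain c = 0.30 / 0.05 = 6 g L⁻¹ of protein.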
After Devoto's death in 1936, the activities of the chemistry laboratory continued under the Clinic's new direction, entrusted to Luigi Preti (1936-1941), who had already worked in the institute and headed the laboratory before moving to the University of Parma. During the years of his directorship, until his premature death in 1941, Preti continued the studies on the toxicity of chemical agents and, in particular, on saturnism. As regards the chemistry laboratory, several studies were published in those years on methods for measuring lead in blood, urine and saliva . In his report on the Clinic's first twenty-five years of activity, Devoto recorded that on 12 September 1912 Bruno Lido Guastalla, son of the great Risorgimento patriot Enrico (1826-1903), gave 10,000 lire to the Clinic to establish a “Museum of Occupational Hygiene” in memory of his brother Gildo , who had recently died . Through gifts from Trieste, Dresden and Brussels, the museum was created; it contained life-size reproductions of more than one hundred occupational diseases. We have no further information about this museum, whose popularizing purpose, addressed mainly to the general public, was to explain the risks associated with the main occupational diseases. The original museum display cases were kept at the headquarters of the Istituti Clinici di Perfezionamento until 2006, when, following a reorganization of Milan's hospitals, the Clinic came under the direction of the Fondazione IRCCS Ospedale Maggiore Policlinico, of which it is still part. The current location of the display cases is not known. In the concluding part of the publication “I venticinque anni di attività della Clinica del Lavoro di Milano”, Devoto devoted a chapter to the Clinic's agreements with workers' organizations, trade unions and mutual aid societies. In this chapter he stressed that occupational medicine, or rather social medicine, needed, in order to function at its best, the integrated skills of numerous professional figures, among whom he identified the toxicologist and the hygienist. An excerpt of the text is reported in full: “[…] it is indispensable to create in the great cities the scientific-practical postgraduate college of work, and in this college there must take part, with valid scientific and didactic cooperation, indeed as co-directing elements, the professor of physiology, the professor of legal medicine, the toxicologist, the hygienist and the labour inspector, who represent the fundamental or prominent sides of occupational medicine, in its scientific mission and in its practical objectives.” The importance that Luigi Devoto attached to the chemistry laboratory is also confirmed by its location within the building. The laboratory was on the first floor of the Clinic, next to Devoto's own study, and its windows opened directly onto the main façade. The rooms were very large and bright and were supplied with the most modern equipment of the time. Such a prominent position, at the centre of the institute, was unusual compared with what could be observed in Italian hospitals and research institutes in those years. The laboratory was in fact generally located in the basement of buildings, in hidden rooms often without direct light.
Its role was, in fact, considered secondary to that of the clinical divisions. Devoto chose to overturn this paradigm, giving the laboratory the dignity it deserved and placing it at the centre of all clinical and research activities. It is important to note that the direction of the laboratory was entrusted to a physician (Preti) only initially. After the latter moved to the University of Parma, responsibility for the facility passed to a chemist (Lo Faso); in this way Devoto intended to enhance the role of this professional figure within a clinical and research institute. Finally, we can note that, during the Clinic's first decades of operation, research activities were managed in an integrated way, and all the laboratory facilities (chemical, physical, microbiological and histopathological) cooperated, each with its specific expertise, in the study of occupational diseases. In conclusion, although the first modern industrial hygiene laboratory of the Clinica del Lavoro di Milano was established by Enrico C. Vigliani (1907-1992) in 1948 , pioneering examples of the application of chemical techniques to the study of occupational diseases can already be found in the early years of the Milanese institute . Luigi Devoto was among the first Italian scholars to understand the importance of the chemistry laboratory's role in biomedical research and, in particular, to grant chemistry the same dignity as the various clinical branches. This same attention to the industrial hygiene and occupational toxicology laboratory still characterises occupational medicine throughout Italy today, which, in its multidisciplinary nature, sees physicians, chemists, physicists, geologists and nutrition scientists working side by side, just as happened a hundred years ago in Luigi Devoto's institute. The authors have declared no potential conflicts of interest in relation to the subjects discussed in the article.
Impact of Systematic Joint Examination (Ultrasound, Functional and Physical) on Treatment Management Decisions in Patients With Haemophilia A in France: Final Data From the Prospective, Observational A‐MOVE Study
Introduction Haemophilia A can result in bleeds into muscles and joints if inadequately managed; one joint bleed alone can cause permanent joint damage, and recurrent joint bleeds lead to long-term musculoskeletal damage [ , , ]. Most bleeds occur in mechanical or weightbearing joints (knees, elbows and ankles) . Prophylaxis with factor VIII (FVIII) replacement therapy, or other non-factor agents, is currently the standard of care for the prevention of bleeding and preservation of long-term joint health in haemophilia A [ , , ]. However, prophylactic treatment may not completely prevent joint bleeds; occasional clinical and subclinical bleeds may occur, resulting in progressive joint disease . Careful and regular joint health monitoring and patient follow-up are crucial in the long-term management of haemophilia . Changes in a patient's joint health may influence their physician's decision to adjust their haemophilia treatment and, importantly, can help identify early joint damage . Joint health can be assessed using several techniques, including functional/physical examinations (Haemophilia Joint Health Score; HJHS) and imaging examinations, such as ultrasound scoring systems (Haemophilia Early Arthropathy Detection with Ultrasound; HEAD-US) . The HEAD-US method allows direct examination of joint health, including the detection of soft tissue damage and peripheral cartilage pathology in early arthropathy, in conjunction with medical history and physical examination . However, these scoring tools are not used routinely in clinical practice, so it remains to be determined whether systematic joint examination using HJHS and/or HEAD-US influences a physician's decisions on haemophilia management in persons with haemophilia A . Furthermore, the sensitivity of each tool to detect clinical and subclinical bleeds has not been directly compared [ , , , ]. Here, we report final data from the multicentre, prospective, low-interventional A-MOVE study (NCT04133883), which aimed to evaluate whether systematic joint examination with HJHS and/or HEAD-US had an impact on treatment management decisions in persons with haemophilia A in France. Methods 2.1 Study Design and Participants The A-MOVE (NCT04133883) study, conducted across 20 haemophilia treatment centres in France, aimed to prospectively evaluate if and how haemophilia treatment management decisions are impacted by systematic joint examination (ultrasound, functional and physical) in patients with haemophilia A over a 12-month period from January 2020 to July 2022. Eligible patients were aged between 6 and 40 years, had haemophilia A of any severity, and were treated prophylactically or on demand with any FVIII product, plasma-derived or recombinant (standard [SHL] or extended [EHL] half-life), according to routine clinical practice in France. All patients had ≥1 joint bleeding episode in the 12 months prior to inclusion. Patients were excluded if they had inhibitors ≥0.6 BU/mL at the latest available inhibitor test, had undergone joint surgery in the year prior to inclusion, had >1 joint replacement and/or were participating in another interventional study. See for further information on the study design, sites and eligibility criteria.
The primary endpoint was assessed by investigating the change in haemophilia management decisions based on systematic joint examination of ankles, knees and elbows (as judged by the investigator), according to the following key questions: changes in haemophilia management (Yes/No); HJHS findings impacted the decision (Yes/No); and HEAD-US findings impacted the decision (Yes/No). Patients from the Full Analysis Set (FAS; consisting of all patients enrolled in the study without major protocol violations) with changes in their haemophilia management due to HJHS and/or HEAD-US examinations comprised the primary interest group. In these patients, the following secondary endpoints supporting the primary objective were assessed: main joint examination findings leading to a haemophilia management decision ( ) and changes to haemophilia management, including treatment regimen (on demand or prophylaxis), FVIII product used, prescribed FVIII product dose/dosing interval and other changes (physical therapy, contact with healthcare professional [HCP], pain and/or anti-inflammatory medication, intra-articular injections and surgical intervention). Additional post hoc analyses were performed in the remaining patients from the FAS as two subgroups: patients with haemophilia management changes based on factors other than HEAD-US/HJHS, and those with no changes in their haemophilia management during the study (only baseline characteristics were analysed). Secondary endpoints for the FAS population included assessment of HJHS and HEAD-US scores at baseline, 6 and 12 months, other factors impacting haemophilia management decisions, annualised bleeding rate (ABR), target joints and pain. Pre-specified secondary endpoints are listed in the . 2.3 Data Collection At baseline, patient demographics and medical, surgical and haemophilia history were collected. At each study visit (baseline, 6 and 12 months), investigators performed systematic joint assessments using HJHS v2.1 (six index joint scores, range 0–20, total score range 0–120) and the HEAD-US protocol (six joint scores, range 0–8, total score range 0–48) . Investigators also performed a general physical examination according to routine clinical practice (recorded as 'normal' or 'abnormal' as per the investigators' discretion), and patients completed patient-reported outcomes (PROs) questionnaires for pain (Brief Pain Inventory [BPI]) and functional ability (Haemophilia Activities List [HAL]/Paediatrics HAL [PedHAL]) at each study visit. Additionally, investigators documented if any changes were made to the patient's haemophilia management (Yes/No), the factors that impacted the change (HEAD-US, HJHS or other) and the specific changes to their haemophilia management decision (including change in treatment regimen [on-demand/prophylaxis] and dosing frequency). Further data regarding patients' FVIII product prescription, bleeding episodes, pain, functional ability, general physical health and adverse events were also collected to support the joint assessment data. 2.4 Statistical Analysis All endpoints were summarised using descriptive statistics (mean, median, interquartile range [IQR] and range); no formal inferential statistical analyses were performed. The sample size was based on feasibility and practical considerations.
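As an illustration of the scoring conventions described above (HJHS v2.1: six index joints scored 0–20 each, total 0–120; HEAD-US: six joints scored 0–8 each, total 0–48) and of the descriptive summaries used in the analysis, the following is a minimal Python sketch. The per-patient values are hypothetical and are not study data.

```python
# Minimal sketch: roll six index-joint scores up into a total and summarise a
# set of totals with median and IQR, mirroring the descriptive approach above.
# All numeric values below are hypothetical illustrations, not study data.
from statistics import median, quantiles

INDEX_JOINTS = ["left elbow", "right elbow", "left knee",
                "right knee", "left ankle", "right ankle"]

def total_score(per_joint: dict[str, int], per_joint_max: int) -> int:
    """Sum the six index-joint scores, checking each stays within its range."""
    assert set(per_joint) == set(INDEX_JOINTS)
    assert all(0 <= s <= per_joint_max for s in per_joint.values())
    return sum(per_joint.values())

def describe(values: list[float]) -> dict[str, float]:
    """Median and interquartile range, as used for descriptive endpoints."""
    q1, _, q3 = quantiles(values, n=4)
    return {"median": median(values), "q1": q1, "q3": q3, "iqr": q3 - q1}

# Hypothetical HEAD-US scores (0-8 per joint) for one patient.
head_us = dict(zip(INDEX_JOINTS, [0, 1, 0, 2, 3, 1]))
print(total_score(head_us, per_joint_max=8))   # 7 out of a possible 48

# Hypothetical HJHS totals (0-120 each) for a small cohort.
print(describe([4, 0, 7, 12, 2, 5, 9]))
```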
Results 3.1 Study Population Overall, the A-MOVE study enrolled 92 persons with haemophilia A of all severities, across 20 haemophilia treatment centres in France from 13 January 2020 to 13 July 2022; participating centres are listed in the . Overall, 86 male patients were included in the FAS ( n = 6 patients excluded due to major protocol deviation); 68 had HJHS/HEAD-US assessments at 12 months, and 66 patients completed the study. Of the 20 patients who discontinued the study, five discontinuations were due to a switch to non-factor therapy (Table ). In the FAS, most patients had severe haemophilia A (64.0%, n = 55) and the median (range) age was 19 (7–40) years; 38 patients (44.2%) were <18 years (Table ). At enrolment, 64 patients (74.4%) received prophylaxis ( n = 52 severe, n = 11 moderate, n = 1 mild haemophilia); 22 patients (25.6%) received on-demand treatment (Table ). In the 3 months prior to baseline, 17 patients (19.8%) received pain and/or anti-inflammatory medication. The subgroups in this analysis included 20 patients (23.3%) with changes in haemophilia management due to joint assessments (HJHS/HEAD-US), 19 (22.1%) with changes in haemophilia management due to findings other than joint assessments, and 42 (48.8%) with no changes in haemophilia management. Baseline demographics and characteristics of the overall population and subgroups are reported in Table . 3.2 Impact of HJHS/HEAD-US and Other Factors on Haemophilia Management Over the evaluation period, 21/86 patients (24.4%) experienced an impact on their haemophilia management due to functional or physical examination (HJHS) and/or ultrasound examination (HEAD-US; Table ). Overall, 11/21 patients (52.4%) experienced haemophilia management changes based on HJHS results over the total evaluation period ( n = 4 changes at baseline, n = 5 at 6 months and n = 7 at 12 months); five patients experienced changes due to global gait scores, and eight were due to joint score items ( n = 2 patients experienced changes due to both components). In total, 20/21 (95.2%) patients experienced changes based on HEAD-US ( n = 12 at baseline, n = 7 at 6 months and n = 6 at 12 months).
Of these, 13/20 (65.0%) scored for synovial hypertrophy, 10/20 (50%) for cartilage degeneration and 2/20 (10.0%) for bone irregularities. Joint health evaluation using HJHS and HEAD‐US together had an impact on management decisions in 47.6% of patients ( n = 10/21). Approximately a third of patients in the FAS (33.7%; n = 29) had haemophilia management decisions impacted by factors other than HJHS/HEAD‐US (Table ), including physical examination findings on extremities and joints ( n = 9), the occurrence of bleeding episodes ( n = 8) and physical activity levels ( n = 8). Total joint scores, as well as by individual joints (ankle, knee and elbow), are presented in Table . 3.3 Changes to FVIII Treatment Impacted by HJHS/HEAD‐US Of the patients with treatment changes impacted by HJHS/HEAD‐US, 7/20 (35.0%) had changes to their FVIII treatment including the regimen, product, dose or dosing interval. 3.3.1 Treatment Regimen Most patients with treatment changes impacted by HJHS/HEAD‐US received FVIII prophylaxis during the whole study period ( n = 15 [75.0%] at baseline and 6 months, n = 12 [70.6%] at 12 months). Most patients received primary or secondary prophylaxis (for both, n = 7 [46.7%] at baseline and 6 months, n = 6 [50.0%] at 12 months); some patients received on‐demand treatment ( n = 5 [25.0%] at baseline and 6 months, n = 4 [23.5%] at 12 months). One patient experienced a change in treatment regimen due to systematic joint assessment at the baseline visit. This patient was switched from on‐demand to once‐weekly prophylactic treatment with the same SHL FVIII product (turoctocog alfa). No other changes to the treatment regimen occurred due to systematic joint assessment. 3.3.2 FVIII Product Of patients with treatment changes impacted by HJHS/HEAD‐US, 13 (65.0%) received rFVIIIFc ( n = 11 prophylactic, n = 2 on demand) and the remaining seven received SHL FVIII products. Of the seven patients with changes in FVIII treatment due to HJHS/HEAD‐US, six received prophylactic treatment ( n = 5 with rFVIIIFc), and one patient received on‐demand treatment (with turoctocog alfa) at baseline. There was no product change among these patients due to systematic joint assessments. 3.3.3 Prescribed Dose and Dosing Interval For the five patients treated with rFVIIIFc, the prescribed weekly dose ranged from 61 to 130 IU/kg/week. The mean (SD) prescribed dose per injection of rFVIIIFc prophylaxis was 40.5 (11.5) IU/kg at baseline, mostly administered twice weekly. The prescribed weekly dose for the one patient treated with turoctocog alfa ranged from 26 to 31 IU/kg/week, and 86 to 117 IU/kg/week for the one patient treated with octocog alfa. Six patients had changes in dosing and frequency of FVIII products, all due to systematic joint assessment at baseline; further details are provided in the . 3.4 Other Changes to Haemophilia Management In the subgroup of patients with haemophilia management changes impacted by HJHS/HEAD‐US ( n = 20), the most frequent changes other than to FVIII treatment were increased physical therapy regimen ( n = 8; 40.0%) and increased contact with the HCP ( n = 5; 25.0%; Figure ). For the subgroup of patients impacted by other factors than HJHS/HEAD‐US ( n = 19), management changes included change to physical therapy regimen ( n = 7; 36.8%), change in contact with the HCP ( n = 7; 36.8%) and change in prescribed anti‐inflammatory or pain medication ( n = 2; 10.5%; Figure ). 
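The dosing figures reported in Section 3.3.3 above relate a per-injection dose to a prescribed weekly dose. A small sketch of that arithmetic follows; the function name and the check are illustrative only, and the numbers are taken from the figures quoted in the text (mean 40.5 IU/kg per injection, mostly administered twice weekly, against the reported 61–130 IU/kg/week rFVIIIFc range).

```python
# Sketch of the implied dose arithmetic: weekly dose (IU/kg/week) equals the
# per-injection dose (IU/kg) times the number of injections per week.
def weekly_dose(dose_per_injection_iu_kg: float, injections_per_week: float) -> float:
    return dose_per_injection_iu_kg * injections_per_week

mean_weekly = weekly_dose(40.5, 2)       # 81.0 IU/kg/week
print(mean_weekly)
print(61 <= mean_weekly <= 130)          # True: consistent with the reported range
```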
3.5 Joint Health In the FAS, both the mean HJHS total joint score (range 0–120) and mean HEAD-US total score (range 0–48) decreased from baseline to 12 months (Figure ). However, comparisons over time were limited as some patients did not have available data at 12 months. Joint health scores according to treatment management changes (Yes/No) are presented in Figure . One patient with haemophilia management changes impacted by HJHS/HEAD-US developed a new target joint, recorded at 12 months. Further details are presented in the . Additional secondary endpoint results (PRO for pain, ABR, pain and inflammation concomitant medication use and PRO for functional ability) can be found in the . 3.6 Safety Of the patients treated with rFVIIIFc, one paediatric male patient with moderate haemophilia at enrolment experienced a serious AE (low-titre FVIII inhibitor; 0.9 Bethesda Unit [BU/mL]) and therefore stopped rFVIIIFc treatment and discontinued the study. The event was considered resolved with a negative titre (<0.6 BU/mL) at a later timepoint. No other serious AEs or non-serious AEs leading to permanent rFVIIIFc treatment discontinuation were reported in the study.
Discussion Final data from the large, multicentre A-MOVE study in France indicated that systematic joint assessments, through functional/physical examination and ultrasound (across ankles, knees and elbows), may impact treatment management decisions in persons with haemophilia A. Findings indicate that regular monitoring of joint health in general practice may lead to haemophilia management change. Although the World Federation of Hemophilia (WFH) recommends annual assessment and documentation of musculoskeletal and overall health of patients , imaging tools were not routinely used in clinical practice in many places during this study.
To our knowledge, this is the first study of its kind to prospectively evaluate the impact of systematic joint examination (functional and structural) in France. Over the 12‐month evaluation period in A‐MOVE, almost a quarter of patients (24.4%) had an impact on their haemophilia management due to HEAD‐US or HJHS findings. The most frequent changes were to the dose and dosing frequency of FVIII replacement therapy (no FVIII product changes), prescribed physiotherapy and contact with HCPs. Comparatively, few patients experienced changes to prescribed anti‐inflammatory or pain medication or had changes to prescribed surgical interventions or any intra‐articular injections. Approximately a third of patients (33.7%) had haemophilia management decisions impacted by factors other than HJHS/HEAD‐US, such as general physical examination findings on extremities and joints, bleeding episodes, physical activity levels, change in body weight and FVIII activity level. More patients had changes to their haemophilia management due to HEAD‐US findings ( n = 20) compared to HJHS ( n = 11). This finding is expected, given the higher sensitivity of ultrasound methods such as HEAD‐US, which can diagnose early joint abnormalities and may help clinicians decide whether to adjust individual treatment management earlier . Moreover, at baseline, more patients had already recorded an impact of HEAD‐US on their haemophilia management compared to HJHS. It is known that prophylaxis is most efficient when started at an early age and before the first joint bleed. Therefore, early detection and diagnosis of joint abnormalities may be critical for preserving joint function . Previous studies report that HJHS shows strong convergent and discriminant construct validity in the detection of arthropathy . Physical examination scales such as the HJHS, although successfully able to assess joint outcomes, are known to have lower sensitivity compared with imaging techniques . Several studies have reported weak or no correlation between bleeding outcomes and HJHS, with asymptomatic and subclinical bleeds not being identified by physical examinations alone . Meanwhile, previous studies suggest that HEAD‐US can detect joint abnormalities with greater sensitivity compared with HJHS [ , , ]. Overall, in A‐MOVE, patients (most treated with rFVIIIFc) showed good responses during the study follow‐up in terms of joint scores and secondary endpoints such as bleeding rates, pain outcomes and use of anti‐inflammatory and anti‐pain medication. These findings are in line with previous studies of patients treated with rFVIIIFc for up to 12 months, which demonstrated low joint bleeding rates and well‐preserved joint status . The lack of impact of either joint health assessments or other factors on treatment management decisions in approximately half of patients (48.8%) could suggest adequate management of haemophilia symptoms in these patients; however, a greater proportion of joint bleeds and target joints were reported in this group at baseline. Although beyond the scope of the present study, further investigation into the treatment of these patients could provide more detailed answers in this population. This large, multicentre, prospective study, which included patients of all disease severities, had several strengths. A‐MOVE is the first study assessing the impact of joint assessment with ultrasound on treatment decision‐making in haemophilia care. 
Further, the broad inclusion criteria allowed a broad representation of the haemophilia A population. Potential patient selection bias is an inherent limitation of this observational study. Only patients with haemophilia A were included, with no data reported for patients with haemophilia B. Furthermore, the choice of participating centres and the tendency for patients with more severe haemophilia to require treatment management changes more often than those with less severe disease could introduce further selection bias. The number of patients in each severity group was relatively low; therefore, conclusive recommendations and comparisons relating to the severity of haemophilia cannot be made. Additionally, comparisons of PROs (including bleeding and pain outcomes) over time were limited as some patients had no available data at 12 months (79% of patients [ n = 68/86] had available data). The exclusion of patients treated with non-factor therapies further limits the comparisons of the results to wider treatment management strategies. Conclusions Final data from the A-MOVE study show that systematic joint assessments, through functional/physical examination and ultrasound, may impact treatment management decisions in persons with haemophilia A. Regular monitoring of joint health using HEAD-US and HJHS could help physicians decide whether to adjust individual treatment management earlier. Overall, these results highlight the importance of joint assessments in persons with haemophilia. Substantial contributions to study conception and design: N.D., V.B., J.B.V., R.J., A.L., B.P.P., S.M.C., B.F., L.F., S.J.J., H.C., M.A., M.F., C.G., M.Z., O.M. and Y.R. Substantial contributions to analysis and interpretation of the data: N.D., V.B., J.B.V., R.J., A.L., B.P.P., S.M.C., B.F., L.F., S.J.J., H.C., M.A., M.F., C.G., M.Z., O.M. and Y.R. Drafting the article or revising it critically for important intellectual content: N.D., V.B., J.B.V., R.J., A.L., B.P.P., S.M.C., B.F., L.F., S.J.J., H.C., M.A., M.F., C.G., M.Z., O.M. and Y.R. Final approval of the version of the article to be published: N.D., V.B., J.B.V., R.J., A.L., B.P.P., S.M.C., B.F., L.F., S.J.J., H.C., M.A., M.F., C.G., M.Z., O.M. and Y.R. A-MOVE study protocol was approved by institutional review boards and/or ethics committees at participating institutions. Patients/their guardians provided written informed consent prior to participation; if appropriate, adolescent/paediatric patients also provided assent. A-MOVE was conducted in accordance with the International Conference on Harmonisation Guidelines for Good Clinical Practice and ethical principles that comply with the Declaration of Helsinki and is registered with ClinicalTrials.gov (NCT04133883). Nicolas Drillaud: Research grants from Novo Nordisk; speaker/honoraria for Octapharma, Roche Chugaï and Sobi. Virginie Barbay: Speaker/honoraria for LFB; consulting fees from Sobi; grant/research support from CSL Behring. Jean Baptiste Valentin: Consultant for Sobi, has received hospitality from Octapharma, Roche and CSL Behring. Romain Jailler: Research grants from Sobi. Aurélien Lebreton: Grant/research support from CSL Behring, Novo Nordisk, Octapharma and Sobi; consultant for Bayer, LFB, Octapharma, Pfizer, Roche and Sobi. Brigitte Pan-Petesch: Consultant for BioMarin, CSL Behring, Novo Nordisk, Roche/Chugai, Sobi and Takeda. Sabine Marie Castet: Consultant (advisory board honoraria or invitation as speaker in symposia) for CSL Behring, LFB, Novo Nordisk, Roche, Sobi and Takeda.
Birgit Frotscher: Consultant for BioMarin, CSL Behring, Novo Nordisk, Sobi and Takeda. Laurent Frenzel: Consulting fees are from CSL Behring, Pfizer, Roche and Sobi. Sandrine Jousse‐Joulin: No conflicts of interest to declare. Hervé Chambost: Consulting fees from BioMarin, CSL Behring, Pfizer, Roche Chugai and Sobi; payment/honoraria for lectures/speakers bureau from BioMarin, CSL, Roche Chugai and Sobi; payment for expert testimony from BioMarin; support for attending meetings from BioMarin, Novo Nordisk, Roche and Sobi. Mikaela Alenäs: Contractor for Sobi; consultant for Aixial Group. Markus Fusser, Corinne Gandossi, Meriem Zidi, Oussama Mahdout: Employees and/or shareholders of Sobi. Yohann Repessé: Grant/research support from CSL Behring and Octapharma; consultant for (scientific advisory board honoraria) LFB, Roche and Sobi. Supplementary Appendix
Identifying the Shared Metabolite Biomarkers and Potential Intervention Targets for Multiple Sarcopenia-Related Phenotypes
Sarcopenia is characterized by a generalized and progressive decline in skeletal muscle strength and mass and is primarily observed in the elderly . Currently, most criteria have used combinations of muscle mass and strength as evaluation measures of sarcopenia [ , , ]. The estimated prevalence of sarcopenia in populations of European ancestry is 23% for those older than 60 and 18% for those younger than 60, which also supports the idea that sarcopenia usually appears early in life . Observational studies indicate that risk factors for sarcopenia include insufficient physical activity, malnutrition, smoking, and diabetes and that sarcopenia has also been linked to a range of negative health outcomes . Therefore, the exploration of early biomarkers for sarcopenia holds the potential to advance the research on etiology, risk prediction, and precision prevention. The human metabolome comprises products of metabolic processes, including intermediates and end products, which collectively present a metabolic fingerprint of an individual . The development of omics-based technologies offers a window to unveil the complex mechanisms and pathological processes of diseases or phenotypes . Specifically, metabolomics can reveal changes in intermediate metabolites or metabolic pathways to find new insights into diseases. Abnormal metabolism has been implicated in various diseases . Evidence from observational metabolomics studies with candidate approaches supports an association between some metabolites and sarcopenia-related phenotypes [ , , ]. However, previous studies based on cross-sectional designs restrict causal inference, despite evidence indicating that broader metabolic disturbances occur in wasting. Additionally, most studies are limited to candidate metabolites, restricting a comprehensive understanding of the relationship between metabolites and sarcopenia. Recent studies also implied causal associations of several metabolites with sarcopenia, such as isovalerylcarnitine, docosapentaenoate, glycine, 1-arachidonoylglycerophosphocholine, pentadecanoate, 3-dehydrocarnitine, and epiandrosterone sulfate [ , , ]. However, there is a lack of insight into the specific and shared metabolic markers between sarcopenia-related phenotypes, and sex differences in the association of metabolites with sarcopenia are not well defined. Recent large-scale genome-wide association studies (GWASs) (1091 metabolites, 690 of which had available metabolite quantitative trait loci [mQTL]) have significantly advanced our knowledge of the genetic architecture underlying the human metabolome . Meanwhile, the availability of muscle strength and mass phenotypes (handgrip strength [HGS], appendicular lean mass [ALM], whole-body lean mass [WBLM], and usual walking pace) with associated genotypes provides a valuable resource for evaluating causal associations by Mendelian randomization (MR). MR employs genetic variants as proxies to achieve robust causal inference between a given exposure and the outcome with minimal reverse causation or confounding effects . Therefore, we conducted a metabolome-wide MR analysis to assess the causal association between metabolites and sarcopenia-related phenotypes and to explore the cross-sarcopenia effects of metabolites. Subsequently, we further investigated these associations separately in males and females. 2.1 Metabolome-Wide MR Identified 118 Sarcopenia-Related Metabolites The reporting of this MR study followed the STROBE-MR checklist ( ). The data sources and detailed information are presented in . The F-statistic for the genetic instruments was larger than the normally selected value of 10, suggesting no weak instrumental variable ( ).
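The F-statistic mentioned above is the conventional check of instrument strength, with F > 10 usually taken to indicate that an instrument is not weak. The exact formula the authors used is not stated in the text reproduced here; the sketch below shows the two standard approximations, with hypothetical inputs except for the metabolite GWAS sample size (n = 8299) quoted later in the Methods.

```python
# Hedged sketch of instrument-strength checks for MR from summary statistics.
def f_from_summary(beta: float, se: float) -> float:
    """Per-SNP approximation: F ~ (beta / se)^2."""
    return (beta / se) ** 2

def f_from_r2(r2: float, n: int, k: int = 1) -> float:
    """F from variance explained (R2) by k instruments in a sample of size n."""
    return (r2 * (n - k - 1)) / ((1 - r2) * k)

# Hypothetical SNP: beta = 0.12, se = 0.015 -> F = 64, comfortably above 10.
print(f_from_summary(0.12, 0.015))
# 1% variance explained by a single instrument in n = 8299 -> F ~ 83.8.
print(f_from_r2(r2=0.01, n=8299, k=1))
```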
The number of nominally significant metabolites ( p < 0.05) across the sarcopenia traits ranged from 239 for ALM to 73 for usual walking pace ( ). There were 118 metabolites (95 known and 23 unknown metabolites) significantly associated with four phenotypes after multiple testing corrections, including 78 known metabolites for ALM, 11 known metabolites for HGS, 47 known metabolites for WBLM, and 1 known metabolite for usual walking pace ( ). Among these known metabolites, no significant pleiotropy or heterogeneity was observed ( ). In the sex-stratified analyses, we observed significant associations between 50, 19, 3, and 1 known metabolite and ALM, WBLM, HGS, and usual walking pace in males, respectively ( and ). In females, we observed significant associations between 64, 23, and 8 known metabolites and ALM, WBLM, and HGS, respectively ( and ). For ALM, HGS, and WBLM, 34, 13, and 6 known metabolites were significant in both the male and female stratified analyses, respectively. There were 16 and 6 known metabolites that were significantly associated with ALM and WBLM only in males. For ALM, WBLM, and HGS, 30, 10, and 5 known metabolites were significant only in females, respectively. Among the 11 known metabolites significantly associated with HGS, 8 metabolites were subsequently validated. The genetically determined levels of myristoylcarnitine, arachidoylcarnitine, S-adenosylhomocysteine (SAH), and 3-hydroxyoleoylcarnitine were positively related to the risk of HGS weakness [OR (95% CI): 1.26 (1.11–1.43), 1.19 (1.07–1.34), 1.15 (1.06–1.25), and 1.20 (1.07–1.36), respectively]. The genetically determined levels of four metabolites (glycine, isovalerylglycine, cinnamoylglycine, and gamma-glutamylglycine) were negatively correlated with the risk of HGS weakness ( ). 2.2 Colocalization Analysis Supports 27 Known Metabolites Among the 95 known metabolites associated with sarcopenia-related phenotypes, 27 metabolites had strong colocalization evidence (PP4 > 0.80) under different windows (±250 kb or ±500 kb) and priors (P12 = 1 × 10−5 or P12 = 1 × 10−6), suggesting a high probability of a shared causal variant between metabolites and sarcopenia-related phenotypes ( ). Specifically, there was significant colocalization support for the associations of 23, 6, and 15 known metabolites with ALM ( ), HGS ( ), and WBLM ( ), respectively. In total, 23 and 13 metabolites were significantly associated with at least one sarcopenia-related phenotype in males and females, respectively, with colocalization evidence support. Among them, 12 metabolites were significant in both sexes, primarily falling into the categories of amino acids and their derivatives, carnitines and their derivatives, and carbohydrate metabolites. The male-specific sarcopenia-related metabolites included myristoylcarnitine, propionylglycine, sphingomyelin, isobutyrylglycine, arachidoylcarnitine, 1-lignoceroyl-GPC (24:0), N-acetylglycine, carnitine, arachidonoylcholine, trans-2-hexenoylglycine, and 3-hydroxyoleoylcarnitine, while the only female-specific sarcopenia-related metabolite was 2-O-methylascorbic acid. 2.3 Thirteen Metabolites with Robust Colocalization Evidence Have Cross-Sarcopenia Effect Among 27 identified sarcopenia-related metabolites, 13 metabolites showed cross-sarcopenia effects. Specifically, the genetically determined plasma myristoylcarnitine level was related to lower ALM, WBLM, and HGS.
The higher levels of five other metabolites (glycine, isovalerylglycine, propionylglycine, gamma-glutamylglycine, and cinnamoylglycine) were associated with higher ALM, WBLM, and HGS. The higher levels of mannose, creatine, and mannonate were associated with higher ALM and WBLM. The lower levels of four other metabolites (beta-hydroxyisovaleroylcarnitine, (R)-3-hydroxybutyrylcarnitine, (S)-hydroxybutyrylcarnitine, and acetylcarnitine) were associated with higher ALM and WBLM ( ). 2.4 Metabolic Pathway Analysis Metabolic pathway analysis was performed among the metabolites with robust colocalization evidence using Metaconflict 5.0. A total of 12 metabolic pathways were detected from two databases, including 7 from both the KEGG and the Small Molecule Pathway databases, as well as 5 from the KEGG dataset. Four important metabolic pathways potentially involved in muscle mass were identified in the metabolic pathway analysis ( ). The results indicated that the “glycerophospholipid metabolism”, “glycine, serine and threonine metabolism”, “linoleic acid metabolism”, and “alpha-Linolenic acid metabolism” pathways might be relevant to ALM and WBLM ( p < 0.05). We did not find pathways associated with HGS and usual walking pace due to limited metabolites. 2.5 Six Potential Modifiable Factors Associated with Sarcopenia-Related Metabolites In the analysis of the associations of 38 modifiable factors with 27 sarcopenia-related metabolites, 6 modifiable factors (1 obesity-related factor: WHRadjBMI; 3 lifestyle factors: television watching, sleep duration, and smoking initiation; and 2 dietary factors: tea consumption and milk intake) were significantly associated with 7 metabolites. Among 13 metabolites with cross-sarcopenia effects, beta-hydroxyisovaleroylcarnitine was positively associated with smoking initiation and short sleep duration, while gamma-glutamylglycine, glycine, and mannose were positively related to milk intake and lower WHRadjBMI ( ).
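The pathway analysis reported above tests whether the prioritised metabolites cluster in particular KEGG or Small Molecule Pathway Database pathways. The text does not spell out the statistical test used by the pathway tool; over-representation analysis with a hypergeometric test is the standard approach, so the sketch below should be read as an illustration under that assumption. The background of 690 measured metabolites and the 27 prioritised metabolites come from the text; the pathway size and hit count are hypothetical.

```python
# Hedged sketch of pathway over-representation analysis (hypergeometric test):
# is a pathway hit by more of the prioritised metabolites than expected by chance?
from scipy.stats import hypergeom

def pathway_enrichment_p(total_annotated: int, in_pathway: int,
                         hits_total: int, hits_in_pathway: int) -> float:
    """P(X >= hits_in_pathway), X ~ Hypergeom(total_annotated, in_pathway, hits_total)."""
    return hypergeom.sf(hits_in_pathway - 1, total_annotated, in_pathway, hits_total)

# Hypothetical: 690 measured metabolites, 25 mapped to one pathway,
# 27 prioritised metabolites of which 5 fall in that pathway.
print(pathway_enrichment_p(690, 25, 27, 5))   # small p suggests over-representation
```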
In this study, we systematically investigated the causal associations between 690 blood metabolites and sarcopenia-related phenotypes in the combined population and in males and females separately. We identified 95 known metabolites associated with sarcopenia-related phenotypes, with the majority showing significance in sex-stratified analyses. A total of 27 metabolites were prioritized after Bayesian colocalization analysis, and 13 of these had a cross-sarcopenia effect. Additionally, 23 metabolites in the males and 13 in the females were associated with at least one sarcopenia phenotype, supported by strong colocalization evidence. Of these, 12 metabolites were the same in the males and females. Glycine has been found to have potential effects on muscle health. Insufficient synthesis of glycine is associated with muscle weakness . A glycine supplement can protect muscles from the impact of various diseases, including cachexia [ , , ]. Gamma-glutamylglycine and gamma-glutamylthreonine were observed to be associated with muscle mass and strength. They can be resynthesized into glutathione under the catalytic action of gamma-glutamyl transpeptidase . Research suggests that glutathione may help maintain muscle health through its antioxidant action . Isovalerylglycine is commonly used for the screening of isovaleric acidemia in newborns. However, a previous study identified it as a nutritional and health-related biomarker for diagnosing or detecting resistance to diet-induced obesity . A higher isovalerylglycine level indicates a greater likelihood of resisting obesity induced by a high-fat diet . Cinnamoylglycine is typically formed through the metabolism of dietary polyphenols by gut microbiota, suggesting a potential role of dietary polyphenols or gut microbiota in maintaining muscle health . Contrary to our findings, Lustgarten et al. found through principal component analysis that the principal component containing cinnamoylglycine was negatively correlated with muscle strength in adults . The reason for this discrepancy may lie in the differences in research methodologies. Our results indicated a causal effect of acyl carnitines on sarcopenia-related phenotypes. Acyl carnitines are predominantly derived from skeletal muscles and play a crucial role in intracellular fatty acid metabolism and transport. Accumulation of acyl carnitines is observed in certain conditions, such as type 2 diabetes, cardiac ischemia, or inherited fatty acid oxidation disorders , emphasizing the impact of these conditions on muscle function. Previous studies suggested that long-chain acyl carnitines are implicated in activating insulin resistance and inflammatory responses in muscle tissues [ , , , ]. Under pathological conditions, such as fatty acid oxidation disorders or myocardial ischemia, acyl carnitines may induce oxidative stress in muscle cells .
In vitro experiments indicated that low concentrations of acyl carnitines may initiate Ca-dependent IL-6 responses by increasing Ca2+ in muscle cells, potentially resulting in muscle cell apoptosis . These findings collectively emphasize the significant impact of acyl carnitines on muscle physiology. As one of the most popular sports supplements, creatine could enhance adenosine triphosphate (ATP) resynthesis in cells, thereby contributing to improved physical performance, including maximal strength, maximal work output, sprinting performance, and fat-free mass . Our results found that sarcopenia-related metabolites were largely consistent across the sexes, though more sarcopenia-related metabolites were identified in the males than in the females. Although the underlying mechanisms remain unclear, we propose several possible explanations. First, sex differences in metabolite levels have been documented in prior studies, which show that males exhibit higher concentrations of metabolites, such as creatine and fatty acid oxidation-related products, including carnitines and acylcarnitine, compared to females . These differences may arise from estrogen regulation , inflammatory responses, muscle mass and body composition, as well as endogenous metabolite synthesis , potentially contributing to the stronger associations observed between some metabolites and sarcopenia in males. Additionally, male-specific environmental factors, such as high-intensity training and higher basal metabolic rates, may influence the concentration or functional activity of certain amino acid metabolites , which may also account for the observed gender difference. Several modifiable risk factors (WHRadjBMI, leisure television watching, sleep duration, smoking initiation, tea consumption, and milk intake) are associated with seven sarcopenia-related metabolites. Previous studies have also identified these factors as being associated with sarcopenia . However, our hypothesis is based on genetic variations as proxies, which limits our ability to fully evaluate potential confounding factors and only provides potential clues. Rigorously designed intervention studies are needed in the future to assess whether changes in lifestyle behaviors could affect the sarcopenia risk through influence-identified metabolites. This study has the following advantages: Firstly, we performed a comprehensive and systematic study to explore the causal association between plasma metabolites and the sarcopenia-related phenotypes, providing a comprehensive view with which to understand the etiological role of metabolites in muscle function. Secondly, the use of the MR and Bayesian colocalization methods reduced the impact of confounding factors and reverse causation, making the current study potentially more robust than observational studies. Thirdly, we identified modifiable factors of relevant metabolites, prioritizing potential interventional metabolite targets of lifestyle changes to treat sarcopenia. However, our study has weaknesses that warrant caution. Firstly, the current study focused on sarcopenia-related phenotypes rather than sarcopenia diagnosed by internationally recognized criteria, such as those defined by the EWGSOP or AWGS. This is due to the current lack of GWASs on sarcopenia as defined by the above standards. Moreover, the sarcopenia-related phenotypes we selected are common indicators in the internationally recognized criteria and have been widely used in previous research . 
Secondly, blood metabolites are commonly used for disease diagnosis or screening due to the accessibility and low invasiveness. Therefore, this study focuses on plasma metabolites, but evaluating the role of metabolites in other tissues, especially muscle tissue, could offer further perspectives on the pathology of sarcopenia. Thirdly, due to the current lack of GWASs on ALM, WBLM, and walking pace in other populations, we were only able to validate metabolites associated with handgrip strength. Our results will require further validation through experimental studies or population-based research in the future. Finally, since we used genetically determined metabolites to explore their association with sarcopenia-related phenotypes, we were unable to provide actual metabolite concentrations in sarcopenia patients. This limitation restricts the clinical applicability of our findings and calls for further research in the future. 4.1. Overall Study Design The study design is outlined in . Initially, we performed a two-sample MR analysis using genetic variants from the recently published metabolomics study as instrumental variables to explore the association between metabolites and sarcopenia-related phenotypes. Subsequently, we conducted colocalization analysis to verify the causal relationships. Metabolic pathway analysis was further conducted to identify the underlying metabolite pathways or groups that might be related to the biological process of sarcopenia. Finally, another MR analysis was conducted between the modifiable risk factors and sarcopenia-related metabolites to identify which metabolites might serve as potential interventional targets of modifiable lifestyle factors. 4.2. Data Sources and Study Population Instrumental variables for the human metabolome were derived from the recently published largest GWAS, which included 690 blood metabolites (571 known and 119 unknown) measured using the Metabolon HD4 platform among 8299 individuals of European ancestry . The mean age was 62.4 ± 9.9 years. As in previous studies, we did not use sarcopenia directly defined by the EWGSOP criteria as an outcome; instead, we analyzed related indicators of sarcopenia . We obtained summary statistics for single-nucleotide polymorphisms (SNPs) associated with sarcopenia-related phenotypes from previous GWASs . In brief, ALM and WBLM were selected as measures of muscle mass; both of these are valid predictors of sarcopenia . HGS and usual walking pace were selected as measures of muscle strength . We used the full summary statistics data from a GWAS on ALM, which contained 450,243 participants of European ancestry, and ALM was measured using bioelectrical impedance analysis (BIA) . For HGS, usual walking pace, and WBLM, full summary statistics data from UK Biobank (n = 461,089, 459,915 and 454,850, respectively; aged between 48 and 73 at recruitment) were used. We also obtained sex-specific full summary statistics data on WBLM, HGS, and usual walking pace from UK Biobank (Neale Lab) in the sex-stratified analysis . For the WBLM, the sample size was 163,815 for the males and 190,993 for the females. For HGS, the sample size was 166,424 for the males and 193,280 for the females. For usual walking pace, the sample size was 165,967 for the males and 193,007 for the females. The GWAS of ALM conducted by Pei et al. contains 205,513 males (the mean age was 57.0 ± 8.1 years) and 244,730 females (the mean age was 56.6 ± 7.9 years) . 
The original publications obtained ethics approvals from the relevant authorities, and informed consent was provided by all participants. 4.3. Metabolome-Wide MR Analysis In the present MR analysis, the following criteria were applied to choose the instrumental variables (IVs) of the plasma metabolites: (i) SNPs associated with a corresponding metabolite at the genome-wide significance level (p < 5 × 10⁻⁸); (ii) due to the complex linkage disequilibrium (LD) structure of SNPs within the human major histocompatibility complex region, SNPs within this region were excluded; (iii) to identify independent IVs for a given metabolite, the clumping process was used (r² < 0.001); and (iv) the R² and F-statistic were used to measure the strength of the IVs, where R² was the variance explained by the genetic variants, and an F-statistic of less than 10 was considered indicative of a weak IV . Finally, 690 unique metabolites and 1291 IVs were included in the subsequent analyses. presents detailed information on the IVs. The MR analysis utilized the “TwoSampleMR” R package . If a metabolite had only a single IV, we utilized the Wald ratio method to evaluate the changes in ALM, WBLM, HGS, and usual walking pace per SD increase in the plasma metabolite, as proxied by the IV. For the metabolites with more than one genetic proxy, we used the inverse-variance weighted (IVW) method, and the heterogeneity of the IVs was estimated by a heterogeneity test based on the Q statistic. Additional analyses, including weighted median, simple mode, MR-Egger, and weighted mode, were also conducted to detect potential horizontal pleiotropy. Bonferroni correction was used for multiple testing, and p < 7.25 × 10⁻⁵ (0.05/690) was considered the significance threshold. Due to the lack of available replication datasets for ALM, WBLM, and usual walking pace, the replication analysis only validated the causal relationship between HGS-related metabolites and HGS weakness. Summary statistics data of HGS weakness were obtained from a previous GWAS meta-analysis, where the definition of HGS weakness was in accordance with the European Working Group on Sarcopenia in Older People (EWGSOP) criteria (HGS < 20 kg for females; < 30 kg for males) . A false discovery rate (FDR) < 0.05 was considered the significance threshold in the replication analysis. 4.4. Colocalization Analysis The Bayesian colocalization analysis was conducted using the “coloc” R package to determine whether two associated signals (muscle strength- and mass-related metabolites and their corresponding phenotypes) were consistent with a shared causal variant rather than the confounding effect of LD . This analysis included five hypotheses: (H0) no causal variant for either the metabolites or phenotypes in the locus; (H1) one causal variant for a metabolite only; (H2) one causal variant for a muscle strength and mass phenotype only; (H3) two different causal variants for the metabolites and phenotype; and (H4) the metabolite and phenotype shared a common causal variant. For each metabolite, we performed colocalization analysis by including SNPs within a 500 kb window upstream and downstream of the instrumental variables, with a default parameter set at P12 = 1 × 10⁻⁵ (the prior probability that an SNP is associated with both metabolites and sarcopenia-related phenotypes). 
Due to the sensitivity of colocalization analysis to the prior and window size, we assessed the robustness of the colocalization by using a smaller window size (±250 kb) and a stricter prior (P12 = 1 × 10⁻⁶) in additional analyses . The posterior probability of H4 (PP4) being greater than 0.80 across the different windows and priors was considered strong colocalization evidence . 4.5. Metabolic Pathway Analysis For the candidate causal metabolites with convincing evidence, metabolic pathway analysis was conducted using the web-based MetaboAnalyst 5.0 to identify underlying metabolite groups or pathways which might be involved in the biological process of sarcopenia . 4.6. Associations Between Modifiable Risk Factors and Sarcopenia-Related Metabolites To investigate the associations between potential modifiable lifestyle factors and sarcopenia-related metabolites with convincing colocalization evidence, we conducted additional univariate MR analyses. These analyses included a total of 38 modifiable risk factors, comprising 12 related to lifestyle behaviors, 24 related to diet, and 2 related to obesity ( ). FDR < 0.05 was defined as the significance level. R version 4.1.1 was used to conduct all the analyses.
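To make the workflow of Section 4.3 concrete, the following is a minimal sketch (not the authors' code) of the two-sample MR step for a single metabolite–phenotype pair using the TwoSampleMR package; the input file names, column layout, and object names are hypothetical placeholders, and the thresholds are those stated above.

```r
# Minimal sketch of the two-sample MR step described in Section 4.3.
# File names and column mappings are hypothetical placeholders.
library(TwoSampleMR)

exposure <- read_exposure_data("metabolite_gwas.txt", sep = "\t")   # one metabolite
exposure <- subset(exposure, pval.exposure < 5e-8)                  # genome-wide significant SNPs
exposure <- clump_data(exposure, clump_r2 = 0.001)                  # independent IVs

outcome <- read_outcome_data("handgrip_strength_gwas.txt", sep = "\t",
                             snps = exposure$SNP)

dat <- harmonise_data(exposure, outcome)

# Wald ratio applies when a metabolite has a single IV; IVW (plus weighted median,
# MR-Egger, weighted mode, and simple mode as sensitivity analyses) when several IVs exist.
res <- mr(dat, method_list = c("mr_wald_ratio", "mr_ivw", "mr_weighted_median",
                               "mr_egger_regression", "mr_weighted_mode",
                               "mr_simple_mode"))

mr_heterogeneity(dat)       # Cochran's Q for IV heterogeneity
mr_pleiotropy_test(dat)     # MR-Egger intercept for horizontal pleiotropy

res$pass_bonferroni <- res$pval < 0.05 / 690                         # 7.25e-5
```

Looping such a block over all 690 metabolites and the four phenotypes, and applying p.adjust(..., method = "BH") where an FDR threshold is used (replication and modifiable-factor analyses), reproduces the overall structure of the analysis, although the exact data sources and harmonization details of the original study are not shown here.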
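Similarly, a minimal sketch (again, not the authors' code) of the Bayesian colocalization step in Section 4.4 might look as follows; region_met and region_phe are hypothetical data frames holding summary statistics for all SNPs within ±500 kb of an instrument, and the sample sizes are those quoted above for the metabolite GWAS and the UK Biobank HGS GWAS.

```r
# Minimal sketch of the colocalization step described in Section 4.4.
library(coloc)

d_met <- list(beta = region_met$beta, varbeta = region_met$se^2,
              snp = region_met$rsid, MAF = region_met$maf,
              type = "quant", N = 8299)       # metabolite GWAS (n = 8299)

d_phe <- list(beta = region_phe$beta, varbeta = region_phe$se^2,
              snp = region_phe$rsid, MAF = region_phe$maf,
              type = "quant", N = 461089)     # e.g., HGS GWAS (n = 461,089)

main   <- coloc.abf(dataset1 = d_met, dataset2 = d_phe, p12 = 1e-5)  # default prior
strict <- coloc.abf(dataset1 = d_met, dataset2 = d_phe, p12 = 1e-6)  # sensitivity prior

# Strong evidence requires PP4 > 0.80 under both priors (and both window sizes).
strong <- main$summary["PP.H4.abf"] > 0.80 && strict$summary["PP.H4.abf"] > 0.80
```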
In summary, we identified 27 plasma metabolite biomarkers with convincing evidence for sarcopenia-related phenotypes and highlighted 13 metabolites with a cross-sarcopenia effect, and we prioritized the potential intervention metabolite targets of lifestyle changes. Our study offered new clues into the prevention and etiology of sarcopenia. Future studies are needed to validate the findings and investigate whether the identified metabolites could serve as potential intervention targets through lifestyles or drugs to reduce the risk of sarcopenia.
Telemedicine in Cancer Care Beyond the COVID-19 Pandemic: Oncology 2.0?
02c0a935-1488-46d8-88cf-8acfa179bb37
9547555
Internal Medicine[mh]
Despite the availability of hardware and a network capable of supporting telemedicine, prior to the COVID-19 pandemic, telemedicine represented < 1% of care encounters . This pattern was multifactorial, including reimbursement of telemedicine services typically at a lower rate than in-person visits, onerous interstate licensing requirements imposing restrictions on the ability of physicians to provide care for patients living in other states, and long-held assumptions and biases about what medical care should entail . Rather suddenly, in March 2020, in response to the rising threat of COVID-19, the Centers for Medicare and Medicaid Services (CMS) made sweeping changes that temporarily eliminated many of these practical barriers , providing instant parity in reimbursement for audio- and video-based telemedicine visits and enabling physicians to see patients for telemedicine visits across state lines with liberalized requirements for HIPAA compliance and state licensure. These conditions converged to support real-world testing of the utility of telemedicine, while concurrently identifying shortcomings for broad and potentially sustained application of telemedicine. One of the most striking findings early in the pandemic was the feasibility of converting to telemedicine-based care. With the closure of in-person clinics in March 2020, there was an immediate and precipitous drop in total visits, accompanied by a concomitant rise in telemedicine-based visits. Princess Margaret Cancer Center in Toronto, Ontario committed to virtual care quickly and was able to offer telemedicine visits within 12 days of the declaration of the pandemic [ •]. Telemedicine comprised 68.4% of visits just weeks after starting, with ambulatory visit volumes back up just a month after deployment. Other health care systems, including both academic centers and community-based networks, reported rapid adoption of telemedicine platforms that restored care to levels delivering over half of total visits during the most challenging early weeks of the pandemic, before declining to a minority of clinical encounters as the acuity of the pandemic de-escalated by June 2020. Over the first 6 weeks of the pandemic starting in March 2020, the proportion of outpatient visits at Cleveland Clinic that were conducted virtually increased from 2% to 75% (up to 90% for primary care visits). Notably, much of the early telemedicine experience was telephone-based in several of these early reports [ •, ], as video-based telemedicine required more infrastructure and technical expertise among both providers and patients. Moreover, the proportion of audio to video varied greatly in different settings, whether as a function of the support within the health care system, differing demographics of the patients seeking care within them, or other factors. Overall, however, while the specifics of how telemedicine was deployed in different health care networks varied, the take-home message of the early pandemic is that audio- and video-based care was able to significantly ameliorate if not completely bridge the gap in in-person care introduced by COVID-19. The next critical question was whether the care delivered via telemedicine was an acceptable alternative for the principal stakeholders, specifically patients and the health care practitioners delivering this care, with preliminary reports indicating that both patients and practitioners were largely satisfied with the experience. 
Specifically, 82% of patients and 72% of practitioners in the Princess Margaret Cancer Center program reported satisfaction [ •]. A video-based program run by Houston Methodist Hospital reported that 92.6% of their patients were satisfied with these visits, among whom 83.4% were very satisfied; the vast majority indicated that they would be highly likely (73.2%) or somewhat likely (17.2%) to pursue another video visit in the future. Notably, more than two-thirds favored having most (36.4%) or at least some (31.2%) of their future visits via televideo, with nearly 1 in 5 (18.8%) favoring having all of their future visits via telemedicine. A Texas-wide community cancer care practice with 640 practitioners at 221 sites of service reported that patients appreciated the option to mitigate risk and desired to maintain the option in the future, while noting that older patients tended to experience greater frustration, which was particularly true with first-time use. One interesting assessment of the patient experience with telemedicine for medical oncology was performed as semi-structured interviews of 20 patients at Thomas Jefferson University . Though such a small sample is clearly of limited utility, the findings highlight the variability of the patient experience: some interviewed patients cited the favorable experience of longer and more thorough visits, while others perceived televideo visits as less thorough and more rushed. Some cited their comfort, at least after the initial video-based encounter, while others felt that there was less of a personal connection compared to the in-person clinical experience. Turning to the physician experience, a survey of 1038 oncologists from within the National Comprehensive Cancer Network (NCCN) found that 93% reported no perceived adverse outcomes from telemedicine [ ••]. Respondents had differing views of the value of telephone- vs. video-based telemedicine beyond the pandemic, with 54% reporting feeling favorably about phone visits, compared with 87% feeling favorably or very favorably about video-based encounters. Practitioners within the aforementioned large practice setting in Texas reported that 76% were satisfied with their current telehealth platform, including 21% who were very satisfied with it . Among cancer care professionals within the community-based Kaiser Permanente Northern California Network that included medical oncologists, radiation oncologists, breast surgeons, cancer navigators, and survivorship clinicians, 76% were either very satisfied or somewhat satisfied with telehealth, and 82% would like it to be maintained or even increased after the pandemic is over. This favorable impression notwithstanding, the broader view of telemedicine in oncology is that it is far better suited for some scenarios and patients than for others. The survey of NCCN oncologists revealed that they estimated that 46% of their visits would be well served by telemedicine, varying based on the nature of the encounter [ ••]. 
Dividing cancer care into scenarios ranging from establishing a personal connection with the patient and family to making decisions about cancer-related procedures to reviewing benign findings or reassuring data, oncologists generated a spectrum of settings in which in-person care was felt to be far superior on one side, while telephone and/or video-based interactions were felt to be comparable or even superior on the other side; in general, video was felt to be more suitable than telephone-based communication for complex interactions. Cancer care professionals in the Kaiser-Permanente Northern California Network also noted variability in the appropriateness of telemedicine for the specific setting: specifically, 49% felt only an in-person visit is acceptable for end-of-life discussions, and 35% felt only an in-person visit is appropriate for the review of a new diagnosis . Overall, oncology lies in the middle of a broader universe of medical care in which certain specialties such as psychiatry and endocrinology have proven to be very well suited to telemedicine, while fields such as orthopedics and ophthalmology have not been as amenable to meaningful adoption of telemedicine. [ •] One instructive account was provided by the palliative care clinic at the Dana-Farber Cancer Institute (DFCI) in Boston, MA . The authors highlighted that not only was it possible for telemedicine to bring their total number of visits back up to a near baseline total number of visits, but they were able to share effective interpersonal connections with their patients despite the understandably emotionally charged nature of this work. With training of their faculty in how to interact most effectively via video-based encounters, the DFCI palliative care team observed that “patients seemed relatively comfortable to discuss goals of care by phone or over video,” noting that they experienced a sharp increase in the number of conversations they completed about goals of care in April 2020 and that “patients often initiated conversations about their goals and preferences.” While this was likely in large part because the pandemic introduced a sense of urgency, it highlights that it is possible to utilize telemedicine platforms even for aspects of oncology care that one might consider to be uniquely suited for in-person visits. Along with the inherent lack of ability to examine a patient, the presumed or experienced lack of comparable interpersonal connection in a telemedicine-based encounter has been cited as a leading limitation in structured interviews of both patients [ , , ] and physicians [ , ••, ]. In the evaluation of the experience of cancer care professionals in Kaiser-Permanente Northern California , the leading challenges were reported to be internet connection (84%) and equipment problems (72%), followed by the lack of ability to pursue a physical exam after that (60%). Not surprisingly, in-person visits were thought to foster a strong patient-clinician connection by 99%, compared to a lower proportion of 77% feeling that video-based visits foster a strong patient-clinician connection. To some patients and physicians, independent of the need for it during a clinic visit, the exam and potential for direct contact in communicating are central to the patient-physician relationship. Telemedicine holds the promise of eliminating geographical disparities in which patients in underserved areas have limited or no access to specialized practitioners or entire fields. 
However, we have also seen that despite the many successes telemedicine has enabled, it can introduce or magnify other disparities. Supporting the conclusion from many reports that older patients are a clear subpopulation with challenges adopting telemedicine [ , , , •], a cross-sectional study of 4525 community-based adults aged 65 or older evaluated subjects for problems with hearing, speaking, dementia, vision, lack of internet-enabled hardware, and lack of use of electronic communications in the preceding months, with “telemedicine unreadiness” defined as a patient being limited by any of these factors [ •]. This analysis revealed that 25% of people aged 65–74, 44% of those 75–84, and 72% of those 85 and older were “telemedicine unready,” and that this was also more common in unmarried, less educated, lower income, and less healthy patients. Studies on disparities have converged to illustrate a consistent theme that many of the patients who have the greatest need for telemedicine support are also those least likely to avail themselves of it. In studies of patients with cancer as well as other medical settings, patients with lower educational levels and socioeconomic status are those most likely not to have pursued telemedicine and/or to report lower levels of comfort with this strategy . Some of these barriers are more readily addressed than others. Notably, telephone-based telemedicine has been favored for some of these patients for whom technological skills and/or equipment are in the shortest supply ; importantly, however, telephone-based encounters are perceived as more limited in the range of care that can be delivered effectively [ ••], and reimbursement for these encounters falls well below that of video-based and in-person clinical care . This establishes a problematic precedent in which practices and physicians providing care for more disadvantaged, sicker, and older patients (those prone to have the greatest difficulty accessing better-reimbursed video-based telemedicine) suffer a financial disincentive to offer telemedicine in the audio-based format that is accessible to a broader range of their patients. Physicians also vary in their receptivity to telemedicine for a wide range of reasons. In some surveys, physicians have expressed unease with telemedicine and a lack of infrastructural support, including equipment like a webcam or a dedicated location to conduct telemedicine visits with needed privacy, lighting, and fast internet connectivity . Importantly, some assessments of telemedicine workflows have noted that while physicians routinely benefit from an array of schedulers, medical assistants, and nurses to facilitate their live clinic, it has been very common for physicians to be left to conduct telemedicine visits with no support staff at all, potentially requiring them to dedicate initial time in the visit to troubleshooting how they and/or the patient can successfully connect online. Facing these headwinds, it is understandable that many physicians consider the challenges of telemedicine an unappealing alternative to the more efficient and far better-supported option of managing patients in person in the clinic, even when a telemedicine-based visit would be appropriate. In the months and years since the immediate and urgent adoption of telemedicine in the early weeks of the COVID-19 pandemic, we have seen a pattern of attenuated use that nevertheless remains far above the level of telemedicine use that preceded the pandemic. 
Looking broadly at medical practice and not specifically at oncology, claims data illustrate that telemedicine encounters rose from 1% of encounters pre-pandemic to 49% in April 2020 before settling at an average of 21.5% of encounters across all jurisdictions by October through December 2020; this varied greatly across geographies, from 10.1% in Mississippi to 49.9% in Massachusetts and 53.4% in Puerto Rico . Also notable from these claims data is the fact that in-state claims represented the overwhelming majority of telemedicine during a time when state licensure restrictions were greatly relaxed (93.5% overall, from all jurisdictions). This may argue that current practice is not hobbled by licensing restrictions, though we also interpret that historical restrictions may contribute to the pattern of a low proportion of telehealth claims from out-of-state providers. Looking specifically at more recent data and findings from an oncology-specific network (see Fig. ), telemedicine use has ebbed and flowed with the varying level of threat of COVID-19, representing a steady minority of approximately 17–20% of encounters after the spring of 2021. The factors contributing to whether a case is likely to be preferentially favored for in-person or telemedicine-based care arise from many dimensions, including those specific to the patient, the disease setting, the physician and institution, as well as societal issues (see Table ). In May 2021, the American Society of Clinical Oncology (ASCO) published a position statement that articulated recommendations about the potential longitudinal role of telemedicine in oncology practice . This statement supports continuing CMS provisions for telemedicine in cancer care beyond the duration of the pandemic, indicating that the benefits of telemedicine should not be limited to a time-sensitive extenuating circumstance. In order to address constraints posed by restrictions on state licensure limiting the ability to provide care for patients across state lines, ASCO offers strong support for all states joining the Interstate Medical Licensure Compact (IMLC) to provide a mechanism for licensed physicians to more readily obtain medical licenses for other states. ASCO maintains a position that the doctor-patient relationship be initiated by an in-person visit before pursuing telemedicine-based care, with the exception that this is not intended to restrict a patient from pursuing a telemedicine-based second opinion. The ASCO position statement also proposes that current medical liability insurance policies should be expected to cover telemedicine interactions. Addressing a concern that telemedicine could become a platform for increased fraud, the stated position from ASCO is that the Federal Trade Commission has the purview to monitor telehealth practice patterns to prevent unfair practices and fraud, so that this concern should not threaten longitudinal adoption of telemedicine for cancer care. One practical application of a variant of telemedicine has been Project ECHO (Extension for Community Healthcare Outcomes), which originated in Albuquerque as a program overseeing the management of hepatitis C and has grown into a telementoring platform employed by the Center for Global Health for a wide range of cancer control programs around the world, including cervical cancer screening and prevention, survivorship, palliative care, and other initiatives. 
As part of Project ECHO, MD Anderson Cancer Center has become a “superhub” for a range of programs providing specialist oversight for underserved rural settings within the US, as well as international oncology support programs for education and patient care coordinated with local practitioners. A variant of telemedicine that has also been adopted over this period is remote consultation for complex cancer cases offered as an employee benefit. West and colleagues described their experience of offering asynchronous reviews of case records by subspecialist experts, who provide written reports summarizing optimal current and future management options, with the intent that most management plans be executed by the primary medical team close to a patient’s home; the initial description focused on 110 patients with lung cancer reviewed in the first 19 months of the program . While not representing telemedicine by a typical definition, this platform for remote input from a subspecialist demonstrated rapid growth in case volumes despite the constraints of the pandemic, offered evidence-based changes or recommendations to improve clinical outcomes in over 90% of cases, and concurrently recommended cost-reducing measures by eliminating low-value interventions that translated to an average projected cost reduction of just over $19,000 per patient. This program, which interfaces with the local medical team for the patient and is therefore not delivering telemedicine directly, is now growing to include a multi-institutional network of National Cancer Institute (NCI)-designated Comprehensive Cancer Centers offering remote education and support in a novel model of cancer care delivery. Given the clear signal that telemedicine is feasible in oncology, the looming question remains of whether and how it will be incorporated as a sustained practice beyond the pandemic. As noted above, the available data illustrate that telemedicine remains far more utilized than it was prior to the spring of 2020; nevertheless, it remains relegated to a small minority component of broader cancer care. Such encounters likely represent the subgroup of patients who are best suited for telemedicine: patients who are clinically most stable, likely not on infusional therapies, comfortable with technology, and without hardware or network barriers. With the arguable exception of a few unconventional models described above that are not directly reimbursed by federal programs or conventional health care insurance, we have yet to see the implementation of innovative new concepts that could transform cancer care by eliminating the geographic limitation of medical oversight to practitioners within convenient driving distance of a patient’s home. There is still the unrealized promise of delivery models that leverage telemedicine as a means of integrating more subspecialized cancer care as oncology becomes increasingly complex. For example, networks that include a large number of general oncologists as well as a subgroup of cancer-specific subspecialists could offer patients a treatment plan developed with a subspecialist seen via a telemedicine visit one or a few times per year, then executed by a general oncologist from the same network close to home. 
Renowned cancer centers have yet to offer remote synchronous telemedicine consultations to out-of-state patients who might be candidates for clinical trial options there, in large part because of the still cumbersome nature of interstate license restrictions. And while the pandemic led to a dramatic drop in clinical trial participation in oncology [ – ], the hyper-regulated world of clinical research has yet to adapt on a meaningful scale by permitting trial-required visits to be replaced by telemedicine, thereby restricting trial access to patients within accessible geography. This is despite a “call to arms” to incorporate telemedicine-based medical visits, remote monitoring, and remote laboratory tests as a means of improving the racial and demographic diversity of clinical trials by reducing the required time away from work and family commitments that likely precludes many from traveling to clinical trial sites [ , , ]. While many barriers to the broad adoption of telemedicine remain, a subset may be ameliorated with deliberate efforts. For those patients limited by a lack of technical skills, needed hardware, or high-speed internet access, interventions that can reduce these barriers include improved broadband through planned government infrastructure investment; programs of loans or donations of inexpensive internet-enabled smartphones or tablets, potentially with a ready-made icon that leads directly to a telemedicine encounter; and implementation of a network of local telemedicine-ready booths, ideally with on-site support, in communities where patients live. For physicians and institutions, there is a need for training in “webside manner,” provision of needed hardware and bandwidth, and a setting with privacy and fast internet to maximize the patient experience. To minimize friction on both sides of the interaction, telemedicine would benefit from dedicated support staff to prepare patients with guidance and training to reach the telemedicine platform, to give both patients and physicians real-time technical support for connectivity and hardware issues, and to provide physicians with the team of schedulers, nurses, and other employees that enables efficient use of time in a virtual encounter, better mimicking the range of support staff routinely enabling efficient use of physician time in an in-person clinic. Unfortunately, these potential remedies for current challenges are likely to remain unrealized as long as uncertainties loom around telemedicine reimbursement, licensure, and liability concerns, particularly when healthcare institutions are slow to break from the status quo. While telemedicine removes the bottleneck of limited exam rooms, office space, and computer terminals in clinics, it also eliminates facility fees and other ancillary charges that may be welcomed by institutions. Reimbursement tied to in-person visits, combined with an undefined future regulatory environment that includes the threat or probability of a lack of parity between in-person and telemedicine-based encounters, cultivates a disincentive for institutions to restructure to replace live visits with telemedicine-based care. 
Recognizing that the US-based health care system practices “reimbursement-based medicine” that incentivizes maximized reimbursement per unit of provider time and institutional resources, any change that reduces anticipated revenue for the same clinical population in a given amount of time is unlikely to be embraced. Interstate licensing barriers can be addressed through the momentum of the growing IMLC, but this improvement still incurs high costs for each state license; the more practical plan of reciprocal interstate recognition of existing licensure faces considerable if not insurmountable political hurdles. Physicians also cite a concern about missing important findings with telemedicine, particularly without a physical exam [ , – , ••, ], on top of a cultural belief among a subset of physicians and patients that the physician–patient relationship is defined in part by direct examination and the “laying on of hands.” In this environment, telemedicine provides a compelling option for institutions only to the extent that it can increase the patient population receiving care and/or create efficiencies and cost savings from a reduced physical infrastructure requirement. With ongoing uncertainties about reimbursement parity, medical licensing barriers, and malpractice insurance coverage, we must anticipate that telemedicine in oncology is likely to remain an option favored for a minority of patients and clinical settings, offered alongside traditional, in-person care for the majority of encounters. Prior to the COVID-19 pandemic, telemedicine was utilized in only a rare minority of oncology encounters, but the pandemic both created an urgent need for an alternative approach to delivering medical care and profoundly reduced barriers through a new regulatory environment that facilitated telemedicine, albeit temporarily. These conditions provided the testing ground for a clear proof of principle that telemedicine for cancer care is feasible and is associated with acceptable, even promising, levels of satisfaction among both patient users and physicians. Over the subsequent two years, as concerns about infectious risk have waxed and waned while the regulatory conditions supporting telemedicine have begun to recede, telemedicine has continued at a far higher level than before the pandemic, but this still represents only a limited subset of cancer care encounters. The broadened use of telemedicine has demonstrated that its practice is not ideally suited for all patients and/or clinical scenarios, for a wide range of reasons. While some challenges may be addressed to ameliorate these limitations over time, the systemic limitations of reimbursement and interstate licensing remain the greatest looming threat to telemedicine substantially redefining how cancer care is delivered. In the meantime, specific adaptations, such as asynchronous case reviews and global oncology programs for underserved populations, provide glimpses of new models that may thrive by not relying on traditional payment structures for medical care. Transformative delivery of cancer care that employs telemedicine to overcome geographic limitations awaits an overhaul of entrenched regulatory hurdles, an overhaul that we can expect will remain elusive in the coming years.
Teaching heuristics and mnemonics to improve generation of differential diagnoses
647f4bca-e14d-49ac-a4f5-cd598e371949
7144285
Family Medicine[mh]
Differential diagnosis (DDx), the cognitive process of producing and prioritizing a list of potential diagnoses for a given clinical presentation, is one of the most important and difficult skills a medical student (or indeed, a clinician at any level of experience) must develop. The student begins to learn the rudiments of forming differentials as early as the first year of medical school, and proceeds in earnest to further advance those skills with actual patients ‘on the wards’ – but in truth, DDx is a skill set that requires a lifetime to master. Despite the centrality of DDx in a physician’s clinical ‘toolbox,’ the available literature gives little guidance as to the most effective way to teach DDx to medical students. Similarly, while it may be that some US medical schools include DDx as a discrete subject, few have published reports describing such curricula – suggesting that at least some schools still prefer the traditional, more indirect and inferential approach of case studies and hands-on clinical experience . This may, in part, be the result of the limited studies that describe or propose overt DDx teaching methodologies, many of which have produced disappointing results. Courses that simply teach abstract clinical reasoning skills or Bayesian competing-hypothesis approaches have not been shown to enhance students’ diagnostic skill, and perhaps this is not surprising, given that experienced physicians do not appear to undertake the process of diagnosis in this overtly analytical manner [ , , ]. What is needed, then, is a more grounded, pragmatic approach to DDx that reflects what working physicians are actually doing – or should be doing – on a day-to-day basis. Clinicians generally employ a mixture of analytic and nonanalytic approaches to the diagnostic process [ – ], and crucially, there is a metacognitive component that is essential but frequently overlooked [ , , ]; in other words, awareness of and access to one’s own knowledge about the subject at hand is, arguably, as important as the knowledge itself. Metacognition encompasses a broad range of processes involved in the control and monitoring of thinking and learning [ , – ]. Pursuant to that, we have developed a model for conceptualizing and teaching DDx that is streamlined and practical, while accounting for the key role of metacognition, and in particular, of metamemory , the conscious methods and mechanisms for conducting memory operations. The model separates the process of DDx into three sequential (though necessarily overlapping) components: Generation – populating a large, inclusive list of possible diagnoses; Filtration – removing diagnoses that do not fit the clinical dataset; and Ordering – ranking the diagnoses (in terms of both likelihood and risk to the patient). This bears some similarity to, but is considerably simpler than, other proposed DDx models, such as the Identify-Frame-Organize-Limit-Explore-Rank-Test paradigm described by Stern et al. . In our experience, and as noted elsewhere [ – ], generation seems to be a ‘rate-limiting step’ for most medical students. And clearly, a failure to generate a robust candidate list of diagnoses will render moot the processes of filtration and ordering. 
Though there is considerable literature addressing the heuristics and biases that result in diagnostic error [ , , , , ], the generative process might be considered anterior to most of these, since a diagnostician cannot entertain, let alone dismiss or overlook, a potential diagnosis that hasn’t even been raised as a possibility . Therefore, our primary focus in this study is to understand what may influence and enhance a student’s capacity for DDx generation. The generation problem may not be primarily one of knowledge deficit. By the third year of medical school, the average student has acquired an impressive fund of clinical knowledge. But difficulties with DDx may stem in greater part from metacognitive limitations, such that the student struggles with integration, organization, and recall of information [ , , ]. One straightforward and time-tested approach to this problem is the use of what can broadly be called metamemory techniques (MMTs) – mnemonic devices and heuristics (‘mental shortcuts’) that serve to remind students and clinicians what they may already know in another context. Students and educators often think of MMTs as gimmicks or ‘tricks,’ but such devices have been shown to enhance academic performance in a variety of settings [ – ]; unfortunately, little is known about the efficacy of MMTs in medical learning, recall, and reasoning, particularly as applied to students’ development of DDx skills. DDx ‘mnemonics’ (an informal term generally used to refer to a subset of MMTs that includes acrostic and phrase expression devices) may be general-purpose or specific to particular clinical situations, and are deployed widely at all levels of training in medical schools and residency programs [ , , ], but there are essentially no studies validating their use or commending their transmission to learners. Moreover, it is unclear which MMTs are the most effective in helping students to generate differentials. In this study, our hypotheses are driven by several related questions: Will the explicit teaching of appropriate MMTs to third-year medical students result in measurable improvements in the size of their differential diagnoses? Which MMTs are the most effective? Which are the least effective? Which MMTs do students subjectively find most helpful? Do their impressions correlate with objective assessments of MMT efficacy? Participants Third-year Family Medicine (FM) clerkship students (MS3s) at the Wright State University Boonshoft School of Medicine were evaluated over one full school year (N = 114), in eight groups (6-week rotations) of 12–20 students. MS3s were anonymized using a unique identifier consisting of a self-assigned random word + random number, which they were to use on all documents throughout the course of the study. Study design Four DDx MMTs were identified and selected for study, based on their broad applicability, apparent common usage in clinical settings, and the teaching experience of the authors, as well as on the relevant literature [ , , , , ]. These are enumerated in . MS3s, as part of their FM didactics curriculum, participated in a 90-min experimental ‘self-examining classroom’ workshop designed to teach the use of DDx MMTs according to the following protocol: MMTs were presented in a fixed order, as listed in . For each MMT: Students were given a clinical case, consisting of a brief history and exam for a fictional patient, and asked to produce an unsorted differential diagnosis within a 3-min time limit. 
These ‘pre-cases’ (prior to instruction in use of the MMT) were returned to the course proctor. Students were then trained in the use of the MMT, which was simply identified as a DDx ‘trick.’ Training generally included a brief demonstration case and a discussion of the practical application of the MMT. Students then completed and returned a 3-min ‘post-case.’ A crossover control design was employed, such that for each MMT, there were two assay cases: Case A and Case B. At the beginning of the session, the class was evenly divided into left and right sides. For the pre-case, the left side completed Case A and the right side completed Case B. For the post-case, the left side completed Case B, and the right side completed Case A. This approach made it possible to measure the pre/post change in a specific case, without the biases introduced by students working the same case more than once. The same case-pairs were used for each MMT throughout the study, always in the order enumerated in . Differential sizes were determined in two ways. Unscreened differential sizes were assessed as a simple tally of the number of diagnoses a student listed for a given case. For screened differentials, the raw lists were parsed by medical readers (AC, KA, FL), and inapplicable diagnoses were removed. A mandatory post-experience questionnaire was also provided to students (N = 56), beginning with clerkship Rotation 3. This questionnaire included Likert-scale questions related to the course and to pre-course experiences with DDx, and also asked students to rank the 4 MMTs in terms of perceived effectiveness. A free-text optional feedback section was also included. Questionnaires were marked with the anonymous identifiers selected at the start of the course. Analysis Case data were collected in a Microsoft Excel spreadsheet and indexed by anonymous identifier. Differentials for all pre-post cases were scored by fourth-year medical students (AC, KA) for gross size (unscreened) and size of the clinically plausible (screened) differential. Questionnaire data were collected and indexed by anonymous identifier in a separate spreadsheet. Data were analyzed using SAS version 9.4 (Cary, NC). Descriptive statistics were calculated as means, standard deviations, and 95% confidence intervals (CI) for all continuous variables. Analysis of variance (ANOVA) was used to assess differences in unscreened and screened scores. Post-hoc tests were conducted using Tukey’s method. To assess the impact of the differential diagnosis interventions, two linear mixed-effects models (one each for screened and unscreened scores) were fitted. The models included a group variable (i.e., Case A first or Case B first), iteration [ – ], clerkship rotation, a group-by-iteration interaction, and a rotation-by-iteration interaction. To examine within- and between-group differences, Tukey’s post hoc tests were conducted to adjust for multiple comparisons. All p-values <.05 were regarded as statistically significant. Graphs were produced using Excel or GraphPad Prism (San Diego, CA). 
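The mixed-effects analysis was run in SAS; purely as an illustration (not the authors' code), an equivalent model could be sketched in R as follows, with hypothetical column names: score is the differential size (one such model would be fitted for screened and one for unscreened scores), iteration indexes the pre/post cases within the workshop, group is the A-B vs. B-A case order, rotation is the clerkship rotation, and a random intercept per student accounts for the repeated measures.

```r
# Illustrative R analogue of the SAS mixed-effects analysis; not the authors' code.
library(lme4)
library(emmeans)

ddx$iteration <- factor(ddx$iteration)   # pre/post case index within the workshop
ddx$group     <- factor(ddx$group)       # case order: A-B vs. B-A
ddx$rotation  <- factor(ddx$rotation)    # clerkship rotation (1-8)

fit <- lmer(score ~ group * iteration + rotation * iteration + (1 | student_id),
            data = ddx)

# Tukey-adjusted within- and between-group comparisons across iterations
emmeans(fit, pairwise ~ iteration | group, adjust = "tukey")
```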
A total of 114 Family Medicine clerkship students, divided into 8 sequential 6-week rotations of 12–20 students, participated in the study. Approximately half (n = 56) received the cases in the pre-post intervention order A-B, with the remainder receiving them in order B-A. Over the entire study, the mean unscreened and screened DDx scores (size of differentials) for the groups combined were 8.14 (95% CI = 7.94–8.35) and 6.69 (95% CI = 6.52–6.87), respectively. Cumulative and individual effect of MMTs These results are presented in and . For the A-B, B-A, and combined groups, the cumulative scores increased by 29.5% (p = .03), 36.4% (p = .0002), and 33.1% (p < .0001), respectively. Of the four MMTs tested, only the Mental CT Scan was found to independently increase DDx scores. This trend was observed in both test groups, and the effect was statistically significant for the combined group (A-B + B-A) of students. Results for cumulative scores and the Mental CT Scan were similar for screened and unscreened groups. Post-experience survey The quantitative results are summarized in . In general, students rated the experience as worthwhile, practically useful, and enjoyable. Students did not rate their pre-course DDx skills or awareness of the tested MMTs highly. There was strong agreement that DDx was a key skill to be acquired, and agreement that a dedicated DDx course should be part of the medical school curriculum. In terms of perceived usefulness, students in the survey expressed their preferences for the MMTs in the order Mental CT Scan ≫ Constellations > Bundling > VINDICATES, with more than 50% preferring the Mental CT Scan ( ). Free-text student comments were also collected, and these reflected very positive perceptions of the content and format of the course overall. Critical feedback included a number of recommendations to: a) grant students more time for case completion, b) include opportunities for open discussion of completed cases, and c) shorten the duration of the course. 
In this study, we have demonstrated that the teaching of specific MMTs to 3rd-year Family Medicine clerkship students resulted in small but significant improvements in their capacity to generate differential diagnoses. Our specific findings may be summarized as follows: For the total study population, a small but significant increase in the size of both screened and unscreened differentials was observed from the beginning to the end of the course, in a manner suggesting an additive effect contributed by each MMT. The MMT referred to as the Mental CT Scan was the only one of the tested techniques that produced an independent, statistically significant positive effect on differential size. Students, consonant with their performance on the test cases, rated the Mental CT Scan as the most useful MMT of the four techniques presented. Students rated the DDx course highly in terms of perceived importance, usefulness, and enjoyability, and made valuable recommendations for course improvement. DDx is one of the essential cognitive skills a clinician must master. A failure to generate adequate and appropriate differentials is a likely root cause of diagnostic error – and such errors frequently put patients at risk. Singh et al. have estimated that, in the US outpatient population alone, perhaps 12 million patients a year are exposed to errors in diagnosis, half of which result in measurable harm. The problem is as serious as it is complex, and it is reasonable to suggest that the solution begins with finding better ways to teach diagnostic skills to students and residents . Yet there is little consensus as to the best way to accomplish this. With few exceptions, textbooks of differential diagnosis do not generally articulate a systematic approach to the DDx process , and the material therein is often presented without explicit evidence or citations [ , , ]. It may be that some medical schools are teaching DDx using a structured, evidence-based approach, but there is very limited support for this in the literature – suggesting that many schools still leave the subject to individual precepting and mentoring in the context of case discussions. In this setting, educators may employ a variety of approaches, heuristics and mnemonic devices for generating differentials – likely including one or more of the MMTs evaluated in this study. In the main, these traditional approaches are applied without explicit validation or rigorous supporting evidence. It is noteworthy, for example, that the VINDICATE(S) acrostic, also formulated as VITAMINCDE or even VITAMINSABCDEK [ , , ], is one of the most commonly taught and commonly used MMTs for generating differentials by pathophysiology. Yet within the limitations of our study, it was shown to be one of the weakest and least popular of the techniques tested. In contrast, the Mental CT Scan was found to be both the most efficacious and the best-liked of the MMTs – at least as applied to cases involving anatomic diagnoses.
Although forms of this visualization technique have been described sporadically in the literature [ , , ], our adaptation of this anatomic visualization device – in which we ask students to mentally ‘scan’ a patient from front to back in the coronal projection, identifying each major tissue plane and compartment along the way – may be particularly attractive and useful to students as well as working clinicians. In any event, our findings suggest that this MMT should be given consideration as an explicitly taught technique for generating anatomic differentials. The two remaining MMTs – Constellations and Bundling – were found in the study to be of intermediate popularity and equivalent (and possibly superior) to VINDICATES in terms of efficacy. These MMTs are interesting in the sense that they are not classic mnemonic ‘tricks’ for merely promoting recall, but instead are heuristics that stimulate pattern recognition – one of the key elements of diagnostic reasoning described in the literature [ , , ]. The Bundling technique is a straightforward metacognitive cue to remind the student or clinician that – precisely as described in the DDx course – ‘diagnoses travel in packs.’ Thus, for a course case involving vertigo, the student might recognize that a differential including labyrinthitis also tends to include (at least initially) such entities as benign paroxysmal positional vertigo, Menière’s Disease, and vertebrobasilar insufficiency. In the study, students were not prepared with specific ‘bundles’ – they were simply taught the concept of diagnostic bundling, given a few examples, and provided with tips for recognizing bundles. Obviously, such an approach will be limited by the student’s knowledge base, which might account for its modest performance in this study. It may well be, however, that Bundling becomes more useful as a function of the clinician’s level of experience. The Constellations technique resembles the method described by Sacher and Detsky , in that it bases pattern recognition on relatively small clusters of clinical information selected for high discriminating power. These need not be near-pathognomonic pivots , as described by Eddy and others [ , , ], but should be findings that help to confine the diagnosis (e.g., in a case of abdominal pain, hematochezia will have more discriminating power than nausea). Our approach to this MMT appears to be novel, however, in the deliberate and explicit use of many different clusters of case data to produce sub-differentials, which can then be summed to generate one large ‘superset’ differential. As with Bundling , this MMT is likely to be most useful in experienced hands; nevertheless, it appeared to outperform all but the Mental CT Scan in terms of both efficacy and popularity. As such, it may represent a teachable form of pattern recognition, one that is more learner-friendly precisely because it offers the diagnostician multiple patterns to recognize. Our study was constrained by certain limitations. In a yearlong protocol involving 114 students, the study was not sufficiently powered to detect small individual MMT effects; it is certainly possible that the benefit of individual MMTs, other than the Mental CT Scan , would achieve statistical significance in a larger study. Another significant limitation with respect to the assessment of individual MMTs is that each MMT was, in effect, assessed by a single pair of cases. 
Thus the per-MMT findings, while novel and intriguing, must be regarded as preliminary, pending larger studies focused on specific MMTs. We were also concerned about potential variability within the A-B case pairs for each MMT. We believe our A-B/B-A crossover control design significantly diminished the likelihood that a given MMT’s effects were the result of inconsistencies in case difficulty. Nevertheless, our linear mixed-model analysis demonstrated small differences in cumulative performance based on the order (A-B vs. B-A) of case completion. Our study design was such that all cases were completed in strictly timed 3-min windows, which introduced another potential limitation, i.e., that some of the MMTs might be less effective in such a time-limited setting. Some students commented, for example, that VINDICATES might be more useful if more time were allotted for its use. Perhaps this accounts for that MMT’s flat performance in the study, and this is a question that might be fruitfully addressed in future work. It might be argued, however, that a 3-min limit approximates a reasonable span of time for an experienced clinician, or even a well-prepared medical student, to generate an initial differential diagnosis, especially given the kind of time constraints under which physicians are increasingly expected to work. It is just these kinds of real-world considerations that make methods such as diagnostic checklists or 2-D diagnostic grids difficult to implement. Also of interest is the question of whether the performance-enhancing effect of the DDx course and its MMTs is durable and persistent. The protocol described herein was not designed to address this question, but we hope to explore the matter in a future study. Informal feedback from fourth-year students and graduates has provided some cautious encouragement for the hypothesis that a formal, MMT-focused DDx course can help students to become better lifelong diagnosticians. Within the context of our 3-phase DDx model (generation-filtration-ordering), this study was designed to assess techniques that operate primarily on the generation phase; that is, it is concerned with optimizing sensitivity to diagnostic possibilities. Whether this alone will support clinicians in making better final diagnoses is an open question; generation may be necessary for diagnosis, but it is not sufficient. Still, it may be considered a sine qua non , as well as a ‘bottleneck’ for many diagnosticians, particularly students – and the ‘failure to consider the correct diagnosis as a possibility’ has been cited as the most common cause of diagnostic error. Moreover, it may be argued that the students’ use of the MMTs also promoted a degree of appropriate filtration, as significant improvements in differential size were generally observed in both screened and unscreened differentials throughout the study. This suggests that MMTs can increase diagnostic sensitivity without sacrificing specificity. In our ongoing studies, we are evaluating heuristics that more directly influence filtration (i.e., optimization of specificity) and ordering (i.e., optimized diagnostic priorities). This study is, to our knowledge, the first to describe a conceptual, MMT-focused DDx course for medical students, and in this context, to develop preliminary data regarding comparative efficacy of commonly used, general-purpose MMTs for generating differential diagnoses. 
Our findings support the following guidance to medical educators who are motivated to accelerate the development of diagnostic skills in their students: A formal, metacognition-focused DDx course, such as described herein, can enhance differential diagnosis generation skills beyond the traditional, case-oriented approach. General-purpose differential diagnosis MMTs, whether taught formally or informally, are not all created equal, and emphasis should be given to those techniques best supported by evidence – noting, however, that there appears to be an aggregate or synergistic benefit to using MMTs together. Medical students may indeed be good judges of which techniques and approaches – which ‘tricks’ – work best for them. Recognizing and embracing that is, perhaps, the real trick to teaching the art and science of differential diagnosis.
Endoscopic management of complete colorectal anastomotic occlusion: Where do we stand?
58cf5c58-3682-4af1-88c8-c045b1fe1024
11886049
Digestive System[mh]
Anastomotic stenosis is a relatively common complication after colorectal surgery with a corresponding incidence that ranges from 3% to 30%. Other than the surgical procedure itself, multiple factors have also been found to be associated with the pathogenesis of this complication such as male sex, neoadjuvant treatment, obesity, sepsis, leakage, pelvic infection, and radiotherapy. The target of treatment is to relieve obstructing symptoms by reestablishing luminal continuity. Surgical revision is feasible but it is better used in refractory cases as its use has been associated with high morbidity. Therefore, therapeutic endoscopy has been recently introduced as a minimally invasive approach for the management of this postoperative complication. Traditionally, anastomotic stenosis is treated with electrical incision, endoscopic balloon dilation, and placement of fully-covered self-expandable metal stents (FCSEMS). However, these techniques cannot be directly used in cases of complete anastomotic occlusion (CAO). In their article, Chi et al presented a novel method for the recanalization of CAO cases with the use of two endoscopes, one for performing electrical incision of the obstructed anastomosis and the second for guide light. Although this technique was found to be associated with various positive outcomes, such as reduced invasiveness, rapid recovery, and shortened hospital stay, it cannot be applied in all cases. This is mainly because the presence of a proximal stoma is a prerequisite for bilateral endoscopic advancement and two experienced endoscopists are required to ensure the safety and effectiveness of the procedure. However, these inherent limitations were not discussed by the authors. Considering these limitations and the absence of a standardized guideline-based procedure for the management of CAO, we reviewed the available literature in an effort to delineate the role of therapeutic endoscopy in the management of this postoperative complication and provide clinicians with the necessary knowledge to improve their daily practice. Techniques for treating completely occluded benign anastomotic strictures Throughout the literature, recanalization was found to be a complex procedure with endoscopic guidance playing a pivotal role in the successful outcome[ - ]. Irrespective of the selected technique, treatment is applied in different steps with the first step being the histological confirmation of the absence of malignancy in the CAO. All reported techniques that have been used for recanalization are outlined in the following sections. Combining electrical penetration with mechanical dilation In this technique, electrical current is applied in repeated, short bursts, making shallow radial incisions in the center of the stenosis at different sites. Electrosurgical knives designed for endoscopic submucosal dissection such as hook and dual knives (Olympus Medical Systems Corp., Tokyo, Japan) are generally preferred for the initial perforation of the obstructed anastomosis over conventional papillotomy or pre-cutting needle knives, as they allow the endoscopist to directly feel and control the incision[ - ]. Anterograde or retrograde incision can be performed. Successful penetration into the proximal colon and absence of immediate complications should be confirmed right after incision[ - ]. The use of two endoscopes can increase the safety of the procedure as the first is used for the incision and the second for guiding and monitoring the intervention. 
However, this is not always feasible, such as in cases where a second endoscopist is not readily available, where the second endoscope cannot reach the stenosis, or where there is a single barrel ileostomy. In these cases, the procedure is performed with one endoscope, and successful penetration is confirmed fluoroscopically[ - ]. After confirmation of the passage into the proximal enteral lumen, a definite treatment of the stenosis can be performed. Throughout the literature, in the majority of cases reported, it was found that the placement of a FCSEMS, which remains in situ for 4 to 60 days after hydrostatic balloon dilation of the initial incision site to allow placement of the stent, is an effective and safe therapy[ - ]. Alternatively, in the article by Chi et al , the authors presented a case where recanalization was achieved with electrical stricturotomy and subsequent dilation with the tip of the scope. In this approach, potential perforation and bleeding during the procedure are the most feared complications. Although none occurred in the reported case, single electrocautery treatment can increase the risk of procedure-related complications secondary to tissue retraction in inexperienced hands, leaving this approach to more experienced endoscopists. Combining mechanical penetration with mechanical dilation To avoid electrothermal injury, previous reports showed that penetration to the oral side of the CAO could be obtained mechanically[ - ]. In this technique, under direct endoscopic view, the endoscopist exerts direct mechanical force to the center of the anastomosis[ - ]. Blunt and sharp catheters, including biopsy forceps, through-the-scope thin dilators such as those included in commercially available transanal drainage systems stiffened by a reversely inserted guide wire, suprapapillary biliary puncture catheters, and trocars and needles used for transjugular intrahepatic portosystemic shunt or sclerotherapy (23-25G, 4-6 mm in length), can all be used for penetration[ - ]. Successful penetration needs to be confirmed prior to continuation of the procedure[ - ]. The use of a second colonoscope as a guide light, in the same way it was used in the article by Chi et al , may be particularly helpful in this technique as it can allow endoscopists to have direct visual control of the advancement of the catheter throughout the procedure, mitigating the risk of serious complications such as perforation. However, if a second endoscopist is not available, confirmation of the intraluminal position of the catheter can be sought fluoroscopically. Subsequently, a guidewire can be passed into the proximal bowel and the stenosis can be treated either with progressive pneumatic dilation with controlled radial expansion balloon dilators until a luminal diameter of 15 to 20 mm is achieved or with deployment of a FCSEMS[ - ]. Special attention is required in cases where the stenosis is stiff and thick secondary to post-surgical fibrosis, which can complicate the procedure. In these cases, penetration might be better performed with the use of sharp catheters or with the use of another technique. Throughout the literature, both modalities are efficient and safe, as none was found to be associated with immediate or delayed complications, whereas revision surgery was deferred in all cases[ - ].
However, compared to balloon dilation, the placement of a FCSEMS for a period ranging from 4 to 60 days may be preferable, as their use was associated with a reduced need for subsequent interventions; approximately half of the patients treated with balloon dilations needed up to 4 sessions before closure of the ileostomy[ - ]. Alternative approaches Recanalization guided by endoscopic ultrasound system: As an alternative to the abovementioned techniques, recanalization of a completely obstructed anastomosis can be performed with the use of an endoscopic ultrasound system (EUS)[ , , - ]. The feasibility of this technique was investigated for the first time in 2008 when a prototype front-view and forward-array echoendoscope was created to guide the process[ , , - ]. Under EUS guidance, penetration in the proximal bowel was performed with a 22-gauge needle and confirmed by using a SpyGlass fiberoptic probe. Serial balloon dilations were then performed to establish patency with excellent results. To date, this technique has been used with variations in the employed modalities[ , - ]. These variations simplified the technique without compromising its safety[ , - ]. The first variation refers to the modality that is used to confirm the successful penetration into the proximal bowel. The selection depends on whether a second endoscope can reach the stenosed anastomosis[ , - ]. In cases where a single barrel ileostomy or blind anastomotic ends are present or there is no access to the proximal bowel, the problem is solved with fluoroscopic guidance[ , - ]. In all other cases, a colonoscope can be passed through the external orifice into the proximal lumen, to fill the space with water to improve visibility and accuracy. After the confirmation of successful penetration, a guidewire is passed and the needle is withdrawn, with the second variation referring to the method the endoscopist will select to achieve resolution of the stricture. Needle knives and endoscopic balloon dilators with or without the placement of a FCSEMS to maintain patency were all found to be effective and safe in the management of CAO[ - ]. However, their use can be complicated by fluid leakage, bleeding, perforation and migration[ - ]. To avoid these complications, a novel generation of stents designed for trans-enteric drainage, named lumen-apposing metal stents (LAMS), can be deployed. To date, LAMS have been repeatedly used for the management of complete anastomotic strictures with good results in terms of efficacy and safety[ , - ]. Their deployment is relatively easy after the initial needle puncture without the need for special equipment, whereas their placement was found to be associated with a reduced need for re-intervention compared to the other modalities, giving LAMS an additional advantage[ , - ]. Transanal endoscopic microsurgery: Transanal endoscopic microsurgery (TEM) is another technique that can be used in the management of complete anastomotic stenosis. In a previous report, Wolthuis et al showed that TEM can be a stable platform that can provide access to the proximal bowel. The procedure requires general anesthesia for the insertion of the 20-cm TEM rectoscope, with perforation of the occluded membrane occurring with a vessel-sealing device. After initial perforation, the patency of the anastomosis was established with balloon dilation up to a diameter of 2 cm.
However, this technique was complicated by a perforation that became evident after dilation and was treated during the procedure with uneventful recovery of the patient. Nevertheless, we believe that the use of a second endoscope for guide light or fluoroscopic guidance could increase the safety of this procedure, increasing its popularity in the future. Magnetic compression: Magnetic compression is a minimally invasive method for creating surgical anastomosis dating back to 1978. However, a recent anecdotal report supports the use of this technique as an alternative for the recanalization of an occluded anastomosis. Based on this report, two magnets of 17.5 mm in diameter can be delivered by a combination of endoscopic and fluoroscopic guidance and placed in the proximal and distal side of an obstructed anastomosis without the need for general anesthesia. Fluoroscopy can confirm that the magnets have attached to both ends of the obstructed stenosis. After successful placement, the magnets are left in situ before being absorbed with a patent neo-anastomosis being induced after 7 days, uneventfully. The size of the anastomosis was large enough thus making any further endoscopic or surgical intervention, unnecessary.
Throughout the literature, the endoscopic treatment of CAO is a complex procedure requiring rather sophisticated steps and equipment. Endoscopic and fluoroscopic guidance is of utmost importance for ensuring a successful and safe recanalization, with their selection being directed by patient-related factors such as the ostomy type and the characteristics of the anastomosis, as well as local availability and expertise. Among the various available techniques, there is insufficient evidence to make recommendations, with the final selection being driven by each endoscopist’s level of experience. In order to make recommendations, the development of a central registry is required that would allow direct comparison of the available techniques.
Effects of Local and Systemic Metronidazole as Adjunctive Treatment in Chronic Periodontitis Patients
011beb7e-e47f-42b1-8b72-f2ee67c2c21d
11608504
Dentistry[mh]
Introduction Chronic periodontitis is an inflammatory and progressive disease causing the destruction and loss of tooth-supporting tissues (Klokkevold, Newman, and Takei ). Periodontal diseases are highly prevalent and affect about 20%–50% of the world population (Nazir ). Various bacteria, such as Gram-negative species of obligate anaerobes, are critical factors in periodontal diseases (Klokkevold, Newman, and Takei ). Accordingly, successful treatment of this disease depends on stopping tissue destruction and eliminating or controlling etiological factors. Currently, the accepted treatment protocol is regular scaling and root planing (SRP) in affected areas. Nevertheless, pathogenic organisms may remain in periodontal tissues and dentinal tubules that cannot be removed by mechanical treatment alone (Greenstein and Caton ). Thus, due to the association of periodontal disease with anaerobic microorganisms, antimicrobial regimens have been added to conventional mechanical methods in treating this disease (Klokkevold, Newman, and Takei ). Several studies have evaluated antimicrobial agents such as metronidazole, amoxicillin, azithromycin, tetracycline, doxycycline, and clindamycin and their effects on treating invasive and chronic periodontitis (Luchian et al. ). Antibiotics used systemically penetrate the tooth's periodontal tissues and deep into the periodontal pockets. Therefore, systemic antibiotics can also target bacteria unreachable by mechanical debridement techniques or local antibiotics (Slots and Ting ). On the other hand, long-term systemic antibiotic therapy increases the risk of problems such as antibiotic resistance (Loesche ) and adverse drug reactions such as nausea, diarrhea, and pseudomembranous colitis (Slots and Ting ). In addition, systemic use of antibiotics requires high patient cooperation. Due to such problems, studies focusing on developing local drug systems to release antibiotics in periodontal pockets are increasing (Rams and Slots ). Various drug delivery systems such as gels, micro- and nanoparticles, fibers, and strips have become available recently (Jepsen and Jepsen ). These systems increase the drug concentration at the target site and minimize the side effects of systemic drug use (Rams and Slots ). Due to the similar clinical results of different antibiotics, none is superior to the others (Luchian et al. ). Metronidazole is effective against periodontal pathogens and bactericidal against anaerobic organisms and is usually used alone or combined with amoxicillin in periodontitis treatment (Manoor). Various studies have demonstrated the clinical benefits of metronidazole (gel or tablet) in treating chronic periodontitis (Slots and Ting ; Loesche ; Rams and Slots ; Manoor; Loesche et al. , ; Kadkhoda et al. ). Hence, the present study evaluated metronidazole (used as a local gel or as a systemic tablet) as an adjunct to conventional mechanical therapy (SRP) with an ultrasonic device in patients with chronic periodontitis referred to the Alborz dentistry clinic and dental office in Urmia, Iran. Materials and Methods This study is a single-blind, randomized clinical trial comparing the effects of metronidazole gel and tablets (in the intervention groups) with SRP alone (in the control group). 2.1 Study Population The study population consisted of patients with periodontitis beyond Stage II (localized or generalized) and Grade B referred to the Alborz dentistry clinic and dental office in Urmia, Iran. Based on the results of the Kadkhoda et al.
study, considering α = 0.05 and a power of 0.8, and using the formula for calculating the sample size for the mean difference in the population, the number of samples was calculated as 28. Allowing for attrition, the sample size was set at 30 in each study group, selected by a random sampling method. Inclusion criteria were: age over 30 years; a diagnosis of periodontitis beyond Stage II (localized or generalized) and Grade B; at least three pockets with a probing depth greater than 4 mm; good general health; no anti-inflammatory drugs or antibiotics in the last 6 months; at least 15 teeth (excluding third molars); good cooperation; and a signed written informed consent form. Patients not meeting the inclusion criteria, those with an O'Leary plaque index of more than 15% after proper oral health education, and those with a history of allergy to metronidazole were excluded from the study. 2.2 Randomization In this study, 30 patients (3 teeth per patient) diagnosed with periodontitis beyond Stage II (localized or generalized) and Grade B were divided into a 10-person control group and two 10-person intervention groups based on a pre-generated, completely random allocation sequence. Block randomization with block sizes of 3 and 6 was used to allocate the samples. 2.3 Intervention The researcher was blind to the type of drug prescribed to each patient. Informed consent was obtained from the patients, and all stages of the study were explained. All patient information remained confidential. The ethics code was obtained from the ethics committee of Urmia University of Medical Sciences: IR.UMSU.REC.1399.319, and the trial was registered in the Iranian Registry of Clinical Trials ( https://en.irct.ir/ ) as IRCT20210408050898N1. Patients in the control group (Group A) received only standard SRP treatment with the ultrasonic device (Figure ). All patients in the intervention groups also first received standard SRP treatment. Patients in intervention Group B received adjuvant therapy in the form of metronidazole tablets (250 mg every 8 h for 7 days) administered by a second person (Noyan et al. ). In intervention Group C, after standard SRP, 0.75% metronidazole gel (Parseh, Tehran, Iran) was administered by the second person into the three study pockets using a syringe with a non-cutting tip until the pocket was completely filled (Figure ). Patients were advised not to eat or drink for 2 h after receiving the gel (Noyan et al. ). The gel was reapplied in the same way 1 week, 2 weeks, and 3 weeks later by the same person (Kadkhoda et al. ). The plaque control of patients included in the study was evaluated every 2 weeks by assessing the O'Leary plaque index.
The Wilcoxon test was then used to evaluate the effects of the different treatments on BOP, CAL, and PPD between the beginning of treatment and 3 months, and the Kruskal–Wallis test was used to compare the means of the variables across the groups. Statistical analysis was performed using SPSS Statistics for Windows, version 22.0 (IBM Corp., Armonk, NY, USA; 2013); the significance level was set at p < 0.05.
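As a purely illustrative aside, the same pair of nonparametric tests could be reproduced in an open-source environment; the sketch below uses Python with SciPy, and every value and array name in it is hypothetical rather than taken from the study data, which were analyzed in SPSS.

# Minimal sketch of the nonparametric tests described above; all numbers are
# hypothetical placeholders, not the study's measurements.
from scipy.stats import wilcoxon, kruskal

# Paired baseline vs 3-month PPD values for one group (hypothetical numbers).
ppd_baseline = [5.0, 6.0, 4.5, 5.5, 6.5, 5.0, 4.5, 6.0, 5.5, 5.0]
ppd_3_months = [3.5, 4.0, 3.0, 4.5, 5.0, 3.5, 3.0, 4.5, 4.0, 3.5]
stat, p_within = wilcoxon(ppd_baseline, ppd_3_months)
print(f"Wilcoxon signed-rank: statistic={stat}, p={p_within:.4f}")

# Change scores (baseline minus 3 months) for the three groups, compared with Kruskal-Wallis.
delta_srp = [1.5, 2.0, 1.0, 1.5, 2.0, 1.0, 1.5, 1.0, 2.0, 1.5]
delta_tab = [1.5, 2.5, 1.5, 1.0, 2.0, 1.5, 2.0, 1.5, 1.0, 2.0]
delta_gel = [2.0, 1.5, 1.5, 2.0, 1.0, 2.5, 1.5, 2.0, 1.5, 1.5]
h_stat, p_between = kruskal(delta_srp, delta_tab, delta_gel)
print(f"Kruskal-Wallis: H={h_stat:.2f}, p={p_between:.4f}")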
Results There were 12 females (40%) and 18 males (60%) out of 30 participants. The p-values of the Shapiro-Wilk normality tests for all CAL, PPD, and BOP parameters at the beginning of treatment (Index i) and after 3 months (Index f) in all treatment groups were less than 0.0001, indicating that all data had skewed distributions. Table shows the mean BOP, PPD, and CAL values in the three treatment groups. This table shows the changes in these parameters for the three treatment groups before treatment and after 3 months. The Kruskal–Wallis test was used to compare the effect of drug use in the three groups. BOP showed the greatest reduction with SRP alone and in the group receiving metronidazole gel as an adjunctive treatment. The mean CAL decreased most with adjunctive metronidazole gel, and the mean PPD decreased most with adjunctive metronidazole tablets or gel. Since the data had a skewed distribution, the Wilcoxon test was used to evaluate the different treatments' effects on BOP, CAL, and PPD, and the results are shown in Table . Based on the results, the values of BOP, CAL, and PPD at the beginning of treatment (Index i) and after 3 months (Index f) were significantly different, and this was true for all treatments. The Kruskal–Wallis test was performed to evaluate and compare the effects of the type of treatment in the three groups, and the results are shown in Table . At the beginning of treatment (Index i) and after 3 months (Index f), the values of BOP, CAL, and PPD did not differ significantly between the three groups ( p > 0.05). In other words, all treatments were equally effective. Discussion Anaerobic bacteria are one of the critical factors in periodontal diseases (Klokkevold, Newman, and Takei ). Since SRP treatment, which is the standard treatment in periodontal diseases, is not able to remove all the bacteria in the depth of the pocket (Greenstein and Caton ), local or systemic antimicrobials that can access the deeper parts of periodontal pockets are of interest. Accordingly, and considering that local drug delivery systems are currently worthy of attention, the purpose of this single-blind study was to investigate the effect of local 0.75% metronidazole gel as an adjunctive treatment in patients with periodontitis beyond Stage II in the Department of Periodontology, Faculty of Dentistry, Urmia University, Urmia, Iran.
Since the use of systemic metronidazole as an adjunctive treatment in the treatment of periodontal diseases has been recommended in numerous articles (Slots and Ting ; Loesche ; Jepsen and Jepsen ; Loesche et al. ; Noyan et al. ; Faveri, Figueiredo, and Feres ; Sgolastra et al. ), the use of 0.75% metronidazole gel can be considered an alternative to systemic metronidazole only when it has at least the same effect as systemic metronidazole. In this single-blind study, patients diagnosed with periodontitis beyond Stage II and Grade B were classified into three groups receiving SRP treatment, SRP treatment plus systemic metronidazole as an adjunctive treatment, and SRP treatment plus metronidazole gel as an adjunctive treatment. All study subjects received SRP treatment because antimicrobial substances have little effect on biofilms, and mechanical debridement leads to an increased impact of the drug, locally and systemically (Pandit et al. ). A calibrated clinician, fully blinded to the type of treatment in each of the groups, evaluated the clinical parameters. This blinding was done to reduce the potential for bias. Metronidazole gel was renewed every 7 days in the gel group because of its short substantivity, in order to maintain an adequate level of the drug in the pockets (Pandit et al. ). In addition, it has been shown that the experience of the operator performing the mechanical debridement and the time spent on SRP are essential factors in the effectiveness of the treatment, and the lack of adequate debridement in the pockets can lead to treatment-resistant pockets (Perinetti et al. ). For this reason, in this study, all SRPs were completed by an experienced operator without setting a time limit, in one or, if necessary, two sessions, to minimize the confounding effects of inadequate SRP. Because of the different levels of plaque and calculus in different patients, each patient may require more or less time. In this study, the patients' plaque index was checked every 2 weeks to eliminate the effect of inappropriate plaque control on the treatment results and to assess the cooperation of the patients in complying with oral hygiene education. Accordingly, patients with a plaque index of more than 15% were excluded from the study due to the confounding effects of plaque and the high probability of patient non-cooperation. The ideal recall interval in supportive periodontal therapy is 90 days (Perinetti et al. ), which was considered in this study. In this study, BOP, PPD, and CAL were significantly different at the beginning of treatment and after 3 months, and this was true for all three treatments. In other words, there was a significant difference in PPD levels between the two stages of the study ( p < 0.001). Furthermore, PPD levels and BOP did not differ significantly between the three groups at the beginning of treatment or after 3 months. In other words, all treatments were equally effective, and no significant difference was seen ( p > 0.05). Nevertheless, in a randomized clinical trial in 2011, Kadkhoda et al. evaluated the effect of local antibiotic therapy with metronidazole combined with SRP in treating aggressive periodontitis. Statistical comparison of PPD between the control and study groups showed that over 6–12 weeks of follow-up after the initial treatment, the p-value went from 0.58 (nonsignificant) to 0.002 (significant) and then 0.001 (significant).
Also, the mean BOP at the beginning of treatment was almost the same between the two groups, but over time, there was a greater decrease in the experimental group than in the control group, resulting in a statistically significant difference between the two groups ( p = 0.005). Also, in 2019, Montaruli et al. conducted a clinical trial in Italy to evaluate the effect of local metronidazole with nonsurgical treatment in patients with chronic periodontitis. They used the Wilcoxon test for PPD values before and after treatment in the control and experimental groups, which showed a significant change in the patients' treatment response ( p < 0.05). Some reasons for these differences are the relatively small number of investigated cases, the low power of the study to detect statistical differences, and the short follow-up period considered for the analysis of clinical parameters. In the reviewed articles, in some cases, it was seen that the clinical parameters were not different at 1–3 months, and then at 3–12 months, these parameters were significantly different. Also, in most studies on local metronidazole usage, higher concentrations of metronidazole gel have been used. Our study used a concentration of 0.75% due to the lack of access to higher concentrations. Similar to the present study, Bergamaschi et al. conducted a pilot clinical trial to compare the effect of local metronidazole gel or systemic tablet use as adjunctive therapy alongside complete oral debridement in smoker patients with chronic periodontitis. PPD improved in all groups from the first month to the last evaluation time ( p < 0.05). However, no statistically significant difference ( p > 0.05) was observed between the groups when each time point was considered separately. Also, in Perinetti's study regarding the effect of 1% metronidazole gel as an adjunctive treatment, the changes in the mean BOP, PPD, and CAL between the group receiving the gel and the control group, despite the evidence of clinical improvement, were not statistically different from each other. Pandit et al. evaluated the effect of minocycline microparticles and 25% metronidazole gel as an adjunct therapy with SRP in treating chronic periodontitis in a randomized clinical trial. In this study, patients receiving SRP treatment with metronidazole gel had a more clinically significant improvement in CAL than the control group. Nevertheless, the difference in CAL increase was not statistically significant. In recent years, various methods and drugs have been used to treat periodontal diseases. Anti-rheumatoid medications are among the drugs investigated for the treatment of periodontal diseases. Considering the inflammatory nature of both rheumatoid arthritis and periodontal diseases, various drugs with anti-inflammatory and immune system-modulating properties that are used in the treatment of rheumatoid arthritis have also been investigated in periodontal therapy, such as MMP (matrix metalloproteinase) inhibitors, anticytokines, corticosteroids, and DMARDs (disease-modifying antirheumatic drugs). However, the definitive effect of these drugs on the treatment of periodontal disease is unknown and requires more studies in this field (Martu et al. ). Another interesting method, studied by Nicolae et al. , is the use of photoactivated toluidine blue O in treating peri-implantitis. Peri-implantitis is a condition in which pathogens on the implant lead to a series of inflammatory responses in the periodontium surrounding the implant.
In peri-implantitis, there is an increase in the bacterial species that play an essential role in periodontitis. In this study, the intervention group with peri-implantitis was treated with photoactivated toluidine blue O in addition to SRP, and the results indicated that patients in the intervention group showed significant improvement in pocket depth and local inflammation parameters. Other treatments that have received attention in recent years are natural substances for oral and dental care and periodontal health maintenance. The reason for this attention is that these compounds are natural, readily available, low in cost, and highly safe. For example, it has been shown that various substances such as Anacardium occidentale extract, soy isoflavones and other bioflavonoids, dentifrice containing S. persica , dentifrice containing Carica papaya , citratus, and A. indica extract have shown remarkable results in improving various periodontal parameters (Budala et al. ). Limitations The findings of this study have to be seen in light of some limitations. First, there is the relatively small number of investigated cases. Due to the coincidence of this study with the COVID-19 pandemic and the lack of access to the calculated sample size, three teeth from each person were included in the study. As a result, the study may not have enough power to detect statistical differences. The second limitation is the 3-month follow-up time for checking the clinical parameters. In the reviewed articles, in some cases, it was seen that the clinical parameters showed no significant difference between 1 and 3 months, and after that, between 3 and 12 months, these parameters showed a significant difference. Also, in most of the studies regarding the use of topical medicine, higher concentrations of metronidazole gel have been used. In this study, due to existing sanctions and lack of access to metronidazole gel with higher concentrations, the concentration of 0.75% was used to evaluate the effectiveness of this accessible concentration. Therefore, to confirm these findings, more studies with a larger statistical population, more extended follow-up periods, or higher concentrations of metronidazole gel, if available, are needed. Conclusion Metronidazole gel and tablets and SRP treatment were all effective in improving the clinical parameters of CAL, PPD, and BOP. Metronidazole, used locally or systemically in patients with periodontitis beyond Stage II and Grade B, did not improve clinical findings compared with SRP alone. SRP treatment is therefore still considered the gold standard in the treatment of periodontal diseases. Further studies are needed to confirm the results. Maryam Mehravani and Ehsan Houshyar conceived the original idea and carried out the experiment, and also wrote the manuscript with the support of Sheida Jamalnia and Rasul Ghareaghaji. All the interventions performed in this study, which included human participants, conformed to the ethical standards of the institutional and national research committee and to the 1964 Declaration of Helsinki and its subsequent amendments. Written informed consent was obtained from all patients participating in the study. All patient information remained confidential. The authors declare no conflicts of interest.
Living well with kidney disease by patient and care-partner empowerment: kidney health for everyone everywhere
d9dfe01b-7d6a-43a3-aa74-bd248f38829f
7958092
Patient Education as Topic[mh]
CKD, its associated symptoms, and its treatment, including medications, dietary and fluid restrictions, and kidney replacement therapy, can disrupt and constrain daily living and impair the overall quality of life of patients and their family members. Consequently, this can also impact treatment satisfaction and clinical outcomes. Despite this, the past several decades have seen limited improvement in the quality of life of people with CKD. To advance research, practice, and policy, there is increasing recognition of the need to identify and address patient priorities, values, and goals. Several regional and global kidney health projects have addressed these important questions, including the Standardised Outcomes in Nephrology (SONG) initiative, with more than 9000 patients, family members, and health professionals from over 70 countries. Across all treatment stages, including CKD, dialysis, and transplantation, SONG-participating children and adults with CKD consistently gave higher priority to symptoms and life impacts than health professionals. In comparison, health professionals gave higher priority to mortality and hospitalization than patients and family members. The patient-prioritized outcomes are shown in Fig. . Irrespective of the type of kidney disease or treatment stage, patients wanted to be able to live well, maintain their role and social functioning, protect some semblance of normality, and have a sense of control over their health and wellbeing. Life participation, defined as the ability to do meaningful activities of life including, but not limited to, work, study, family responsibilities, travel, sport, social, and recreational activities, was established as a critically important outcome across all treatment stages of CKD. The quotations from patients with kidney disease provided in Box 1 demonstrate how life participation reflects the ability to live well with CKD. According to the World Health Organization (WHO), participation refers to “involvement in a life situation.” This concept is more specific than the broader construct of quality of life. Life participation places the life priorities and values of those affected by CKD and their family at the center of decision-making. The World Kidney Day Steering Committee calls for the inclusion of life participation, a key focus in the care of patients with CKD, to achieve the ultimate goal of living well with kidney disease. This calls for the development and implementation of validated patient-reported outcome measures that could be used to assess and address areas of life participation in routine care. Monitoring of life participation could be supported by regulatory agencies as a metric for quality care or to support labelling claims for medicines and devices. Funding agencies could establish targeted calls for research that address the priorities of patients, including life participation. Box 1 Quotations from patients with CKD related to priorities for living well Patients with CKD and their family members, including care partners, should be empowered to achieve the health outcomes and life goals that are meaningful and important to them. The WHO defines patient empowerment as “a process through which people gain greater control over decisions or actions affecting their health,” which requires patients to understand their role and to have the knowledge, skills, and support needed to engage with clinicians in shared decision-making and self-management.
For patients receiving dialysis, understanding the rationale for a lifestyle change and having access to practical assistance and family support promoted patient empowerment, while feeling limited in life participation undermined their sense of empowerment. The World Kidney Day Steering Committee advocates for strengthened partnership with patients in the development, implementation, and evaluation of interventions for practice and policy settings that enable patients to live well with kidney disease. This needs to be supported by consistent, accessible, and meaningful communication. Meaningful involvement of patients and family members across the entire research process, from priority setting and planning the study through to dissemination and implementation, is now widely advocated. There have also been efforts, such as the Kidney Health Initiative, to involve patients in the development of drugs and devices to foster innovation. We urge greater emphasis on a strength-based approach as outlined in Table , which encompasses strategies to support patient resilience, harness social connections, build patient awareness and knowledge, facilitate access to support, and establish confidence and control in self-management. The strength-based approach is in contrast to the medical model, in which chronic disease care is traditionally focussed on pathology, problems, and failures. Instead, the strength-based approach acknowledges that each individual has strengths and abilities to overcome the problems and challenges faced, and requires collaboration and cultivation of the patient's hopes, aspirations, interests, and values. Efforts are also needed to identify structural biases, discrimination, and disparities in the health care system, so that all patients are given the opportunity to have a voice. A care partner is often an informal caregiver who is also a family member of the patient with CKD. They may take on a wide range of responsibilities, including coordinating care (including transportation to appointments), administration of treatment including medications, home dialysis assistance, and supporting dietary management. Caregivers of patients with CKD have reported depression, fatigue, isolation, and burnout. The role of the care partner has become increasingly important in CKD care, given the heightened complexity of communication and therapeutic options, including the expansion of telemedicine during the COVID-19 pandemic, and the goal of achieving higher life expectancy with CKD. The experience of caring for a partially incapacitated family member with progressive CKD can represent a substantial burden on the care partner and may impact family dynamics. Not infrequently, the career goals and other occupational and leisure aspects of the care partner's life are affected by the CKD care partnership, leading to care-partner overload and burnout. Hence, the above-mentioned principles of life participation need to apply equally to care partners as well as to all family members and friends involved in CKD care. In low- and lower middle-income countries (LICs and LMICs), including in sub-Saharan Africa, South East Asia, and Latin America, patients' ability to self-manage or cope with the chronic disease varies, and is often influenced by internal factors, including spirituality, belief system, and religiosity, and by external factors, including appropriate knowledge of the disease, poverty, the family support system, and one's grit and social relations network.
The support system comprising healthcare providers and caregivers plays a crucial role, as most patients rely on them to make decisions and to carry out the necessary adjustments in their health behavior. In LIC regions, where there are often relatively few physicians and even fewer kidney care providers per capita, especially in rural areas, a stepwise approach can involve local and national stakeholders, including both non-governmental organizations and government agencies, by (1) extending kidney patient education in rural areas, (2) adapting telehealth technologies, where feasible, to educate patients and train local community kidney care providers, and (3) implementing effective retention strategies for rural kidney health providers, including adapted career plans and competitive incentives. Many patients in low-resource settings present at a very late stage, needing to commence emergency dialysis. The fortunate few who receive kidney transplantation may gain an extraordinary chance to return to normal life, notwithstanding the high cost of immunosuppressive medications in some countries. For some patients and care partners in low-income regions, spirituality and religiosity may engender hope; when ill, they are energized by the anticipation of restored health and spiritual wellbeing. Informing a patient of a diagnosis of kidney disease is often a harrowing experience for the patient, the caregivers, and the healthcare professional. Most patients present to kidney physicians (usually known as "renal physicians" in many of these countries) with trepidation and apprehension. It is rewarding, therefore, to see a patient's anxiety dissipate after reassurance that a diagnosis of simple kidney cysts, urinary tract infection, simple kidney stones, a solitary kidney, and the like will not require extreme measures such as kidney replacement therapy. Patients diagnosed with glomerulonephritis whose disease is appropriately characterized by kidney biopsy and histology, and who receive appropriate therapy and achieve remission, are relieved and very grateful. Patients are glad to discontinue dialysis following resolution of AKI or acute-on-chronic kidney disease. Many CKD patients who have residual kidney function appreciate being maintained in a relatively healthy state with conservative measures, without dialysis. They experience renewed energy when their anemia is promptly corrected using erythropoiesis-stimulating agents. They are happy when their peripheral oedema resolves with treatment. Those on maintenance hemodialysis who have endured emergency femoral cannulations appreciate the construction of good temporary or permanent vascular access. Patients remain grateful for waking from a uremic coma or recovering from recurrent seizures when they commence dialysis. The World Kidney Day 2021 theme, 'Living Well with Kidney Disease', was deliberately chosen to redirect focus towards plans and actions for achieving patient-centred wellness. "Kidney Health for Everyone, Everywhere", with emphasis on patient-centred wellness, should be a policy imperative that can be successfully achieved if policy-makers, nephrologists, health care professionals, patients, and care partners place this within the context of comprehensive care. Patient engagement is an essential requirement.
In 2016, the World Health Organization (WHO) published an important document on patient empowerment (WHO 2016): 'Patient engagement is increasingly recognized as an integral part of health care and a critical component of safe people-centred services. Engaged patients are better able to make informed decisions about their care options. In addition, resources may be better used if they are aligned with patients' priorities and this is critical for the sustainability of health systems worldwide. Patient engagement may also promote mutual accountability and understanding between patients and health care providers. Informed patients are more likely to feel confident to report both positive and negative experiences and have increased concordance with mutually agreed care management plans. This not only improves health outcomes but also advances learning and improvement while reducing adverse events.' At the ISN Community Film Event at the World Congress of Nephrology (WCN) 2020 (ISN Community Film Event 2020), it was heartening to see a quote from patients in the film: "Tell me. I will forget; Show me. I will remember; Involve me. I will understand." The ISN Global Kidney Policy Forum 2019 included a patient speaker, Nicki Scholes-Robertson from New Zealand: 'Culturally appropriate and sensitive patient information and care are being undertaken in New Zealand to fight inequities in kidney health, especially in Maori and other disadvantaged communities.' World Kidney Day 2021 encourages policy-makers to increase focus and resources on both drug and non-drug programmes for improving patient wellness. Examples include funding for erythropoiesis-stimulating agents and anti-pruritic agents for managing anemia and itchiness, respectively, to name but a few. Home dialysis therapies have consistently been found to improve patient autonomy, flexibility, and quality of life in a cost-effective manner, enhancing life participation. Promoting home dialysis therapies should tie in with appropriate 'assisted dialysis' programs to reduce patient and care partner fatigue and burnout. Similarly, self-management programmes, cognitive behavioural therapy, and group therapies for managing depression, anxiety, and insomnia should be promoted before resorting to medications. The principle of equity recognizes that different people with different levels of disadvantage require different approaches and resources to achieve equitable health outcomes. The kidney community should push for adapted care guidelines for vulnerable and disadvantaged populations. The involvement of primary care and general physicians, especially in LICs and LMICs, would help improve the affordability of, and access to, services through the public sector, supporting symptom management for patients with CKD and improving their wellness. In the overall wellness program for kidney disease patients, the need for prevention should be reiterated. Early detection, followed by an effective secondary prevention program that allows a prolonged course of wellness despite kidney disease, should be promoted. Prevention of CKD progression can be attempted through lifestyle and diet modifications, such as a plant-dominant low-protein diet, and by means of effective pharmacotherapy, including administration of sodium-glucose transport protein 2 (SGLT2) inhibitors. WKD 2021 continues to call for increased awareness of the importance of preventive measures among populations, professionals, and policy-makers, applicable to both developed and developing countries.
Effective strategies to empower patients and their care partners pursue the overarching goal of minimizing the burden of CKD-related symptoms to enhance patient satisfaction, health-related quality of life, and life participation. The World Kidney Day 2021 theme, 'Living Well with Kidney Disease', was deliberately chosen to redirect focus towards plans and actions for achieving patient-centered wellness. Notwithstanding the COVID-19 pandemic that has overshadowed many activities in 2020 and beyond, the World Kidney Day Steering Committee has declared 2021 the year of "Living Well with Kidney Disease" in an effort to increase education and awareness of the important goal of effective symptom management and patient empowerment. Whereas World Kidney Day continues to emphasize the importance of effective measures to prevent kidney disease and its progression, patients with preexisting kidney disease and their care partners should feel supported to live well through concerted efforts by kidney care communities and other stakeholders throughout the world, even during a world-shattering pandemic such as COVID-19 that may drain many resources. Living well with kidney disease is a non-negotiable goal of all kidney foundations, patient groups, and professional societies alike, to which the International Society of Nephrology and the International Federation of Kidney Foundations World Kidney Alliance are committed at all times.
Impact of BRCA mutations, age, surgical indication, and hormone status on the molecular phenotype of the human Fallopian tube
4ceab6b0-9d26-4249-85db-28a023f9149e
11947093
Biochemistry[mh]
The human Fallopian tube (FT), also known as the uterine tube or oviduct, plays a crucial role in female reproductive biology. It is a highly specialized organ for transporting the ovum and is the site for fertilization and early development. It is now clear that the FT is a site of origin for pelvic serous cancer, a spectrum of diseases that includes high-grade serous cancer of the ovary, FT, and peritoneum – . A widely accepted tumorigenesis model includes stepwise progression starting with clonal expansion of histologically normal, lowly proliferative p53 -mutated, secretory epithelial cells. This is followed by progressive accumulation of additional molecular alterations, increased cellular proliferation, and cytologic atypia ultimately leading to a serous tubal intraepithelial cancer (STIC), localized almost exclusively in the fimbria of the distal FT. Invasive serous cancer arises directly from STIC cells within the FT or in the ovary/peritoneal cavity as a consequence of exfoliation of STIC cells from the lumen of the FT – . Phylogenetic analyses suggest the entire process spans decades with p53 signatures arising 20 or more years before serous cancer diagnosis and STIC lesions and early invasive cancer developing roughly 6.5 years and 2 years before diagnosis respectively , . BRCA1 and BRCA2 mutation carriers (hereafter BRCAm) are at significantly elevated risk for serous ovarian cancer with a lifetime risk of ~15–40%, depending on the specific mutation, relative to those without pathogenic mutations (hereafter non-BRCAm). An estimated 30% of serous cancers have BRCA1/2 functional loss , . Risk increases with age and is reduced by breastfeeding, oral contraceptive pill use, and prior tubal ligation . It is well established that surgical removal of FTs and ovaries substantially reduces risk and recent data suggest that removal of the FT alone without oophorectomy also reduces risk but the magnitude is uncertain – . The molecular mechanisms whereby germline BRCA1/2 mutations contribute to serous cancer risk are unclear. While p53 signatures occur at roughly equal frequency in FTs from BRCA1/2 mutation carriers and controls, incidental STIC lesions are ~40-fold more common in BRCA1/2 mutation carriers , . The transition from p53 signature to STIC appears to require inactivation of the functional BRCA allele . However, some literature suggests that BRCA mutations contribute to molecular alterations prior to loss of heterozygosity (LOH), as non-tumorigenic, haplodeficient breast and ovarian epithelial cell lines exhibit evidence of genomic instability and an altered response to replication stress – . Prior molecular profiling studies of FTs from BRCAm carriers also appear to support a haploinsufficient mechanism for BRCA -associated serous cancer risk. Transcriptomic studies in normal FTs have identified large numbers of differentially expressed genes between BRCAm and control groups – as well as an altered transcriptional response to post-ovulatory microenvironmental stress . Similarly, DNA methylation analysis of normal FTs from BRCAm carriers and controls posited that germline BRCA1/2 mutations contribute to elevated ovarian cancer risk via epigenetic reprogramming at the HOXC4 locus . Furthermore, a BRCAm-associated DNA methylation signature from normal blood was predictive of breast cancer incidence and survival , and a BRCA1/2 -specific proteomic signature from proximal liquid biopsy was shown to be sensitive at detecting high-grade ovarian cancer . 
In addition, BRCA1/2 germline mutations have been reported to be associated with accelerated aging in a host of normal tissues. While this body of literature suggests that even heterozygous BRCA1/2 mutations are associated with molecular alterations that impact cancer risk, these studies are all limited in sample size. Most BRCA-related studies age-match, with fewer also matching on menopause status. Moreover, most do not consider other confounding variables such as cell composition, hormonal status, and reason for surgery, which is inherently confounded with risk reduction in the BRCA1/2 population. Here we report results from an integrated transcriptomic, epigenomic, and proteomic analysis of histologically normal FT fimbria from BRCA1/2 carriers and controls, testing the hypothesis that there are indeed molecular alterations in FTs of carriers that contribute to field cancerization and serous cancer risk. Our analysis focuses on the fimbrial region of the FT, where serous cancers are thought to arise. We address the limitations of prior molecular profiling studies by controlling for covariates known to influence the molecular and cellular landscape of the FT. Namely, our results demonstrate that the molecular profiles of FT fimbria from BRCA1/2 mutation carriers and controls are not demonstrably different when cell composition and reason for surgery are accounted for. The data generated from over 100 FTs provide the research community with the first integrated, multi-omic FT data resource annotated with key epidemiologic and reproductive covariates. Our study design allows us to define cellular and molecular differences in the FT associated with non-BRCAm serous cancer risk factors, including hormone status and age.
Characteristics of the sample cohort
We obtained full-thickness FTs from formalin-fixed paraffin-embedded blocks collected less than 3 years prior. Only histologically normal tubes, defined using standard clinical workflows and following review by a gynecologic pathologist, were selected (i.e., no STIC lesions). The SEE-FIM protocol, including p53 and Ki67 IHC where indicated, was followed for all BRCA1/2-mutated patients. Our initial focus was comparing heterozygous BRCA1/2 pathogenic mutation carriers (BRCAm) to a control cohort without mutation (non-BRCAm). As such, the distal ends of 125 FTs were collected (70 non-BRCAm, 28 BRCA1, and 27 BRCA2; Fig. ; Table ; Supplementary Data ). The BRCAm group includes cancer-free patients with a known pathogenic mutation in BRCA1 or BRCA2 based on germline sequencing performed for clinical indications at a CLIA-approved laboratory (Supplementary Data ). The non-BRCAm group includes patients with neither BRCA1 nor BRCA2 mutations on clinical testing ( n = 3) and patients who did not meet clinical criteria for genetic testing ( n = 67). Following processing and QC filtering, we had high-quality data for 105 FTs (Table ; Fig. ). QC metrics did not differ by BRCA group (Supplementary Fig. ). FTs were collected from a diverse patient population and annotated with clinical information such as age at time of salpingectomy, menopause status, and surgical indication (Table ; Supplementary Data ). Other clinical data available included number of pregnancies, days to last menstrual period, endometrium state from pathology, and information on contraception use. There were no significant differences in age or menopause status between germline BRCA groups (Table ; Fig. ).
Efforts were made to recruit a diverse cohort with regard to self-reported race (Fig. ; Table ). Many non-BRCAm FTs came from elective salpingectomy at the time of cesarean section; these FTs were designated as the postpartum group (Table ). In BRCAm patients, we confirmed clinical germline variant annotations by directly observing the alternate allele or deletion in our whole-genome bisulfite sequencing data. The clinical BRCA mutations were all known pathogenic variants, classified as frameshift, nonsense, missense, splice-site, or large-deletion variants (Fig. ; Supplementary Data ). This orthogonal validation confirmed the clinical heterozygous variants and demonstrated that there was no sample swap in the molecular profiling workflows, which is a recognized problem for large genomic studies. Variant calling from WGBS data did not identify any known pathogenic BRCA1/2 mutations in the non-BRCAm FTs.
Tissue-type composition dominates the molecular profiles of the Fallopian tube
We expected cell type composition to be a main driver of variation in the molecular profiles of bulk FTs. Therefore, we first estimated stromal fraction using the DNA methylation level at the MIR200CHG (HGNC:53161) promoter (Supplementary Fig. ). This promoter region (chr12:6962650-6964102) is fully methylated in mesenchymal cells and unmethylated in epithelial cells. Therefore, the methylation fraction at this promoter is directly proportional to the stromal:epithelial cell type composition within bulk samples. In addition, two experienced pathologists independently estimated the non-epithelial cell fraction from hematoxylin and eosin (H&E) stained sections in 13 randomly selected FTs. This further validated MIR200CHG methylation level as an accurate epithelial:stromal indicator (rho = 0.81; P = 7.1e-4; Supplementary Fig. ). RNA-expression levels for key epithelial/mesenchymal markers correlated well with the DNA methylation-based estimate (Supplementary Fig. ). Based on the DNA-methylation method, stromal fraction in our samples ranged from 24% to 89%, with a median of 54%. The mean epithelial fraction did not significantly differ by BRCA status, despite being slightly higher in BRCAm FTs (F test P = 0.13; Supplementary Fig. ; Table ). The stromal fraction increases with age, indicating a loss of FT epithelium relative to the stromal compartment with aging (rho = 0.42; P = 0.0007; Supplementary Fig. ). Stromal fraction, age, postpartum status, and BRCA status were each independently associated with principal components (PCs) from one or more -omics assays (Supplementary Fig. ). We tested these variables together in general linear models to see what explains PC1 and PC2 (Supplementary Data ). Overall, stromal fraction (as measured from DNA) was the strongest explanatory variable for both DNA methylation PCs, as well as for PC2 of both the transcriptome and the proteome (Supplementary Fig. ; Supplementary Data ). The transcriptome and proteome PC1s were most strongly associated with being postpartum or not postpartum. RNA PC1 was also associated with BRCA status, but we later show that this is due to postpartum status and other surgical indications, which are confounded with BRCA status. BRCA status is also associated with DNA methylation PC1, but not when adjusted for both stroma and age (Supplementary Data ). Overall, cell composition explains the most variation, with postpartum status, age, and surgical indication also being important.
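A minimal sketch of how a methylation-based stromal-fraction estimate of this kind could be computed and checked against pathologist review is given below; the file names, column names, and use of pandas/scipy are illustrative assumptions rather than the authors' actual pipeline.

import pandas as pd
from scipy.stats import spearmanr

# Per-CpG methylation (beta) values and pathologist estimates; hypothetical file layouts.
betas = pd.read_csv("wgbs_betas.csv")        # columns: sample, chrom, pos, beta
path_est = pd.read_csv("pathologist.csv")    # columns: sample, stromal_fraction

# MIR200CHG promoter (chr12:6962650-6964102): methylated in mesenchymal cells,
# unmethylated in epithelium, so the mean beta approximates the stromal fraction.
region = betas[(betas.chrom == "chr12") & (betas.pos.between(6962650, 6964102))]
stroma = (region.groupby("sample").beta.mean()
                .rename("stroma_meth").reset_index())

# Compare with pathologist estimates of the non-epithelial fraction.
merged = path_est.merge(stroma, on="sample")
rho, p = spearmanr(merged.stroma_meth, merged.stromal_fraction)
print(f"Spearman rho = {rho:.2f}, P = {p:.1e}")

The same per-sample estimate can then be carried forward as a covariate in downstream models, which is the role stromal fraction plays throughout the analyses described here.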
Differential methylation analysis between BRCAm and non-BRCAm groups
Overall, the measured DNA methylation patterns at heterochromatin, transcriptional start sites, and CTCF binding sites are consistent with expectations, including clear nucleosome positioning around CTCF binding sites, indicating high data quality (Fig. ). However, there are no differences in global DNA methylation at these sites based on BRCA status. Similarly, a multiscale representation previously used to examine DNA methylation at 10 kb to 10 Mb scales in normal tissue samples showed no visible changes (Fig. ). To complement these large-scale analyses, we also performed differentially methylated region (DMR) calling at base resolution, in search of any locus-specific differences. We tested various contrasts (Table ), adjusting for key covariates. As postpartum status (only present in the non-BRCAm group) appeared to have a major impact on the molecular profiles (Supplementary Data ; Supplementary Fig. ), we excluded postpartum FTs when comparing BRCA groups to avoid confounding (Table ). In a model adjusting exclusively for cell composition, no regions had an adjusted P < 0.05. We further examined known loci with DNA hypermethylation in either STIC lesions or high-grade serous ovarian cancer. No DNA hypermethylation at these loci was observed in any BRCAm FTs (Fig. ). A prior study reported epigenomic reprogramming in BRCAm FTs driven by HOXC4-mediated AID over-expression in fimbriae. We do not observe any difference in AID expression in fimbriae between BRCAm and non-BRCAm FTs; AID expression is very low in all groups (Supplementary Fig. ). HOXC4 methylation in fimbriae is not different between BRCAm and non-BRCAm, but HOXC4 expression decreases with increasing percent stroma, suggesting epithelial expression (Supplementary Fig. ). HOXC4 expression was not different by BRCA status when adjusting for cellular composition. Other genes previously associated with BRCA1 variants in histologically normal FTs also show strong correlations with cell composition (Supplementary Fig. ) and do not distinguish BRCAm samples in our data (Supplementary Fig. ).
Differences in RNA and protein expression between BRCAm and non-BRCAm Fallopian tubes
We tested the same models used for differential methylation to find BRCAm versus non-BRCAm differences in both RNA and protein expression (Table ). We found 159 differentially expressed genes (DEGs) and 0 differentially expressed proteins (DEPs) when adjusting for stroma fraction as well as excluding postpartum FTs (Supplementary Data ). Of note, accounting for stroma and postpartum samples removed 95.5% of DEGs and all 90 DEPs found in the univariate BRCA model (Fig. ; Supplementary Fig. ; Supplementary Data ). Genes lower in BRCAm were overwhelmingly immune-related immunoglobulin (IG) genes (Fig. ). However, the difference appeared to be driven by a subset of non-BRCAm samples. Unlike FTs from the BRCAm cases, which exclusively came from patients electing risk-reducing surgery, many of the non-BRCAm cases presented with clinically significant pathology, including uterine fibroids, endometriosis, adenomyosis, and benign ovarian masses, that can be associated with a local inflammatory response or simply increased vasculature (Fig. ; Table ). Thus, the observed difference in immune activity is more likely associated with these benign conditions in a subset of non-BRCAm cases, rather than with immune downregulation in the BRCAm cases per se (Fig. ).
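To illustrate the kind of covariate adjustment described for these contrasts, a per-gene linear model with stromal fraction as a covariate, fitted after excluding postpartum samples, might look like the sketch below; the data layout, the 0/1 coding of BRCA status, and the use of statsmodels OLS are assumptions for illustration and not the study's actual differential expression workflow.

import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.multitest import multipletests

# Normalized expression (genes x samples) and sample metadata; hypothetical layouts.
expr = pd.read_csv("log_cpm.csv", index_col=0)
meta = pd.read_csv("sample_meta.csv", index_col=0)   # columns: brca (0/1), stroma, postpartum (bool)

# Exclude postpartum FTs so the BRCA contrast is not confounded by postpartum status.
meta = meta[~meta.postpartum]
expr = expr[meta.index]

pvals = {}
for gene in expr.index:
    df = meta.assign(y=expr.loc[gene, meta.index].values)
    fit = smf.ols("y ~ brca + stroma", data=df).fit()
    pvals[gene] = fit.pvalues["brca"]          # BRCA effect, adjusted for stroma fraction

res = pd.Series(pvals)
fdr = multipletests(res.values, method="fdr_bh")[1]
print(f"{(fdr < 0.05).sum()} genes at 5% FDR after adjustment")

Dropping either the stroma term or the postpartum exclusion from a model like this is what would reintroduce the large set of apparent differences seen in the univariate comparison.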
Only two non-BRCAm FTs came from non-postpartum elective sterilization, which would be expected to better represent the normal state. Indeed, these two FTs did not show increased immune expression and were similar to the risk-reduction BRCAm samples. Interestingly, many postpartum FTs also had increased immune gene expression, possibly due to increased microvasculature, consistent with the increased endothelial signature in this group (Fig. ). A handful of DEGs upregulated in BRCAm were primarily mitochondria-encoded genes (Fig. ), but their fold change in BRCAm was low (fold change less than 1.5; Fig. ). Protein products were only detected for 25 of the 159 DEGs, consistent with the very low expression level of many of these genes (Supplementary Fig. ). When these 25 proteins were examined, they reproduced the immune upregulation in a subset of non-BRCAm FTs (Fig. ). Despite significant associations between gene expression and protein product (Supplementary Fig. ), differential gene expression patterns beyond immunoglobulins were not reproduced in the protein data (Fig. ). There was no difference in BRCA1 (Fig. ) or BRCA2 (Fig. ) mRNA abundance between the non-BRCAm and BRCAm cases. Coverage was not high enough in the transcriptomics data to confidently quantify allelic expression. Neither BRCA1 nor BRCA2 was detected in the proteomics data. RNA expression of both BRCA1 and BRCA2 was highest in high-epithelial FTs (Supplementary Fig. ).
Non-BRCAm-related factors impact the cellular composition of primary human Fallopian tubes
Several recent studies have used single-cell technologies to obtain a detailed molecular profile of the cell types present in normal FTs, including extensive markers for stromal, epithelial, and immune cell types (Fig. ). We used the expression level of consensus marker genes (Supplementary Data ) to define the cellular composition landscape in our bulk samples and to evaluate how it changes with important covariates, especially age, menopause status, pregnancy, and menstrual phase. As shown with the DNA methylation data, the epithelial cell fraction dropped around and after menopause. Postmenopausal FTs consisted mostly of stroma and atrophic epithelium. Postpartum FTs and postmenopausal FTs were both strongly depleted of ciliated epithelial markers. Interestingly, while postpartum FTs, like the older postmenopausal FTs, were depleted of canonical epithelial markers such as EPCAM, they uniquely showed expression of KRT8/18/19 and TMEM190, along with a subset of the secretory cell markers (e.g., KRT7, PAX8, and MSLN), but not others (CSMD1 and OVGP1). Postpartum FTs were also strongly depleted of ESR1 and PGR expression, presumably due to high circulating hormone levels (Fig. ). In contrast, postmenopausal FTs still expressed these hormonal receptors. Postpartum FTs also showed an increase in endothelial cell marker expression. Based on these markers, pre-menopausal FTs clustered into two main groups, stromal-rich and epithelium-rich, with the epithelium-rich FTs further split into groups strongly associated with the menstrual cycle state of patient-matched endometrium (proliferative/follicular, secretory/luteal, or inactive) for those patients where endometrial tissue was available (Fig. ). A subset of the putative luteal phase FTs that were also high in stroma grouped with the inactive samples. These samples exhibited high pericyte, muscle, and fibroblast marker expression, and also appeared to have more immune cells.
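One way to turn consensus marker lists into per-sample composition scores of the kind described above is to average z-scored expression over each marker set and then cluster samples on those scores; the abbreviated marker lists and the Ward-linkage clustering below are simplified assumptions, not the study's exact marker panels or procedure.

import pandas as pd
from scipy.cluster.hierarchy import linkage, fcluster

expr = pd.read_csv("log_cpm.csv", index_col=0)   # genes x samples, normalized

# Abbreviated example marker sets; the consensus lists referenced in the text are larger.
markers = {
    "ciliated_epithelium": ["FOXJ1", "TPPP3", "CAPS"],
    "secretory_epithelium": ["PAX8", "OVGP1", "KRT7"],
    "stroma": ["DCN", "COL1A1", "PDGFRA"],
    "endothelium": ["PECAM1", "VWF", "CDH5"],
}

# Z-score each gene across samples, then average within each marker set per sample.
z = expr.sub(expr.mean(axis=1), axis=0).div(expr.std(axis=1), axis=0)
scores = pd.DataFrame({name: z.loc[z.index.intersection(genes)].mean()
                       for name, genes in markers.items()})

# Group samples by their marker-score profiles (e.g., into five clusters).
clusters = fcluster(linkage(scores.values, method="ward"), t=5, criterion="maxclust")
print(scores.assign(cluster=clusters).groupby("cluster").mean())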
Interestingly, the secretory epithelial markers themselves clustered into two groups: those expressed in the luteal phase and those expressed in the follicular phase (Fig. ). An expanded secretory epithelial marker panel obtained from a high-resolution single-cell method, which included well-known canonical markers such as PAX8, MUC1, and OVGP1, also clustered into these two groups (Fig. ). This split by menstrual cycle phase in the normal FT is consistent with expression of secretory cell markers in normal endometrium from both phases (Fig. , left). Of these previously reported markers, secretory cells of the follicular (proliferative) phase had higher levels of OVGP1, PLCB1, FMOD, PODXL, ANO1, and PKHD1L1, while those of the luteal (secretory) phase had higher levels of MUC1, C3, and PAX8 (Fig. ). We contrasted gene expression in premenopausal luteal phase FTs versus premenopausal follicular phase FTs and found that the expression differences mirror those previously shown in cycling endometrium (rho = 0.26; Pval < 2.2e-16; Supplementary Fig. ; Supplementary Data ). As seen in the endometrium, the secretory/luteal phase was associated with TGF-beta signaling, ECM-receptor interaction, and the Notch pathway, while the proliferative/follicular phase showed upregulation of genes involved in oxidative phosphorylation and the cell cycle (Supplementary Fig. ). Based on these results, we categorized the premenopause FTs into four clusters: (1) follicular, (2) luteal, (3) inactive/stroma-rich, and (4) postpartum. Postmenopause samples constitute a fifth major cluster. H&E staining confirmed the presence of the stroma-rich subset identified by RNA-seq and protein data in the inactive samples (Fig. ). Postpartum samples also exhibit substantially higher stromal content compared to other pre-menopause FTs (Fig. ). Both the postpartum and inactive clusters appear to have more blood vessels compared to postmenopause samples (Fig. ), consistent with the molecular data (Fig. ). We identified markers for the five major FT clusters independently in RNA (7495 unique genes at 5% FDR; Supplementary Fig. ; Supplementary Data ) and protein (936 unique proteins at 5% FDR; Supplementary Fig. ; Supplementary Data ). Many genes had FDR < 0.05 for more than one group, so markers were further grouped by their intersections (Supplementary Fig. ). Postpartum samples had the most unique markers across data types, followed by the luteal and follicular clusters (Supplementary Fig. ). A total of 383 unique genes demarcated the same cluster (FDR < 0.05) in both RNA and protein (Supplementary Fig. , right). From these consistent markers, the top 10 by P value are shown for RNA (Fig. ) and protein (Fig. ). For markers that were significant in only one data modality and not the other (at the 5% FDR level), similar trends were visible in the other data type (Supplementary Fig. ). Overall, postpartum FTs tended to be associated with higher expression of cell adhesion molecules, including laminin genes. In addition, laminin subunits showed an interesting switch between the groups. Postpartum FTs were uniquely marked by LAMB1, while LAMA4 and LAMB2 were expressed postmenopause. The inactive cluster has high stroma, so many of its markers are shared with postmenopause (Fig. ); however, many proteins (e.g., ALDH1A1 and COL6A1) were elevated only postmenopause. The premenopause follicular and luteal clusters also shared a substantial fraction of their RNA and protein markers, although each showed its own robust, unique gene expression program.
Among others, follicular FTs were marked by OVGP1 and CPM, while the premenopause luteal (PreM-L) cluster was marked by ALDH5A1 and SRGAP3 (Fig. ). Interestingly, postpartum and luteal FTs also shared many markers, including monoamine oxidase A (MAOA).
Aging and estrogen signaling in the normal Fallopian tube
Some studies suggest an association between cellular aging and heterozygous germline BRCA1/2 mutations in vitro. As DNA methylation represents a unique and powerful tool to assess biological aging, we inferred the biological age of these samples with the Horvath clock. The inferred age from our sequencing data correlated with clinical age at the time of salpingectomy (rho = 0.55; Pval = 1.2e-9; Fig. ). Using this inferred biological age, we did not observe any difference in acceleration or deceleration of aging between the BRCA groups (Fig. ). In addition, it was previously shown that DNA methylation in a special sequence context, dubbed solo-WCGW, can be used to measure the replicative history of a cell. Similar to the Horvath clock, the solo-WCGW levels were also consistent with a lack of difference in replicative history between the BRCAm carriers and controls (Fig. ). We next investigated how aging affects estrogen signaling, as increasing ESR1 promoter methylation has been reported to be associated with age in various tissue types. In the human FT, there is no observable increase in ESR1 promoter methylation with age (rho = 0.056; Pval = 0.55; Supplementary Fig. ), although there is a gain of methylation flanking the CpG island surrounding the canonical TSS and A/B promoter (as annotated in Lung et al.) in some samples, independent of age (Supplementary Fig. ). The DNA methylation level of ER binding sites does increase with age (rho = 0.37; Pval = 1.0e-4), but more strongly tracks tissue composition than age (Fig. ). When both age and stroma fraction are jointly modeled, only stroma fraction was significant (Pval < 2e-16 for stroma; Pval = 0.90 for age). This suggests that ER enhancers are primarily active in the epithelium. High-epithelium samples (>70%) showed decreasing solo-WCGW methylation levels with age (Fig. ), while the overall solo-WCGW level positively correlated with age (Supplementary Fig. ). This opposite trend in all samples is likely confounded by tissue composition (Fig. ; Supplementary Fig. ), as solo-WCGW methylation shows a much stronger association with stroma fraction (Supplementary Fig. ). These results suggest that the stromal cells retain their solo-WCGW methylation better than epithelial cells, indicating slower turnover. Indeed, megabase-scale blocks of DNA methylation loss are evident in samples with a high epithelial fraction, and not in samples with a high stroma fraction (Fig. ). Together, these results suggest that the epithelial compartment is likely where estrogen signaling and cellular turnover primarily occur in the FT.
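The joint model behind the statement that only stroma fraction remains significant can be sketched as an ordinary least-squares fit of mean ER-binding-site methylation on age and stroma fraction together, and age acceleration can likewise be taken as the residual of clock-predicted age regressed on chronological age; the file and column names and the statsmodels/scipy calls below are illustrative assumptions, not the study's implementation.

import pandas as pd
import statsmodels.formula.api as smf
from scipy.stats import mannwhitneyu

# Mean methylation over ER binding sites plus covariates; hypothetical layout.
df = pd.read_csv("er_site_methylation.csv")   # columns: sample, er_meth, age, stroma

# Jointly model age and stromal fraction; in a fit like this the age term can
# lose significance once stroma fraction is included.
fit = smf.ols("er_meth ~ age + stroma", data=df).fit()
print(fit.summary().tables[1])

# Given clock-predicted ages, age acceleration is the residual of predicted age
# regressed on chronological age; compare residuals between BRCA groups.
ages = pd.read_csv("epigenetic_age.csv")      # columns: sample, age, dnam_age, brca
accel = smf.ols("dnam_age ~ age", data=ages).fit().resid
print(mannwhitneyu(accel[ages.brca == "BRCAm"], accel[ages.brca == "non-BRCAm"]))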
BRCA1 and BRCA2 are among the most widely known germline cancer susceptibility genes. With the high penetrance of BRCA1/2 mutations for breast and ovarian cancer risk, there has been intense research interest in whether these germline mutations cause any genomic or epigenomic alterations before LOH or any visible aberration occurs, particularly in disease-relevant tissue types such as the FT for ovarian cancer and breast tissue for breast cancer. Indeed, prior studies claimed prevalent epigenetic, transcriptional, and other biological differences (e.g., accelerated aging) in both FTs and peripheral blood of BRCA1/2 carriers. Our study shows that germline heterozygous BRCA1/2 mutations per se are not associated with extensive changes in the human FT, indicating that there is likely no haploinsufficiency for BRCA1/2 functions. LOH, or another form of secondary hit such as promoter DNA methylation for BRCA1, is likely required before any further molecular alterations are incurred. This is consistent with LOH being almost always observed in primary HGSC samples from BRCA1/2 carriers. It is further consistent with early molecular studies showing LOH in 6/6 STIC or microinvasive cancers, but not in p53 signatures nor histologically normal FT epithelium. One caveat is that we treated BRCA1 and BRCA2 pathogenic variant carriers as one group when comparing to controls. However, BRCA1 and BRCA2 have distinct functions and are associated with different cancer risks. Analyzing BRCA1 and BRCA2 independently did not identify any differences from controls beyond what is presented here. Another important consideration is that we are examining bulk primary samples directly from patients. One copy of BRCA1 and BRCA2 may be sufficient in most cells when cellular stress levels are normal. However, these same cells could exhibit haploinsufficiency when challenged with genotoxic conditions, and this may explain findings of haploinsufficiency in vitro. We show in this paper that the cellular composition of the FT tissue has the biggest impact on readouts from bulk-omics assays. We further show that this cellular composition changes with age, menopausal status, and hormonal status (e.g., postpartum). Importantly, risk-reducing surgery after completion of any desired childbearing is standard of care for pathogenic BRCA1/2 carriers. The FTs from non-BRCAm patients, in contrast, originate from various clinical presentations, including fibroids, abnormal bleeding, or removal during cesarean section. These different clinical features result in a generally younger patient sample population for BRCAm, as well as a lower fraction of benign conditions affecting the reproductive tract. Similar to other BRCA-related studies, we made an effort to age-match by purposefully collecting younger non-BRCAm FTs. However, we show that even in similarly aged premenopausal FTs, cell composition can vary widely. Therefore, the clinical covariate of age does not fully account for the effects of cell composition, hormone status, or surgical indication. It is also important to note the significance of surgical indication as a potential confounder in BRCA-related studies. As stated above, BRCAm carriers often have their breast or FT tissues removed for prophylactic purposes, while non-BRCAm FTs originate from benign conditions. We expect this confounding to be common in human-sample-based BRCA1/2 studies, as it is difficult to avoid.
Importantly, we show that postpartum FTs have a unique signature, and that many FTs from salpingectomies for benign uterine conditions have immune activation. In this study, we have demonstrated that the remaining transcriptomic differences between BRCAm and non-BRCAm after adjusting for epithelium/stromal composition are likely due to a slightly elevated immune reaction in the non-BRCAm group. Benign uterine conditions, including leiomyoma, were the most common indication for gynecologic surgery in our control group. Chronic inflammation and immune response have been associated with the pathogenesis of leiomyoma , , and likely contribute to the inflammatory molecular signature we observed in some of our control samples. Endometriosis, another common indication for benign gynecologic surgery, is considered to be a chronic inflammatory disease, characterized by dysregulation of multiple immune pathways with macrophages, NK cells, and B cells playing major roles . In light of this, results that indicate immune cell differences in BRCAm carriers need to be interpreted carefully. Confounding may be more pronounced with small sample sizes, such as in single cell or spatial datasets. Indeed, recent single cell studies on BRCA1/2 mutated FTs showed changes in immune signatures , . A similar consideration applies to BRCA-associated normal breast studies, as normal breast tissue from non-BRCAm patients is often from individuals undergoing breast reduction and will therefore often have a different cellular composition than BRCA-mutated samples, which come predominantly from prophylactic mastectomy. This confounding can also manifest in various ways in different studies. In our study, after identifying a difference in age between the BRCA groups within our initial smaller cohort, we sought to balance the age distribution by including more young, premenopausal non-BRCA individuals. Simultaneously, we requested additional FTs from Black individuals to improve representation. However, we later recognized that postpartum surgeries were disproportionately represented as a surgical indication for salpingectomy among younger, premenopausal non-BRCA individuals; as a result, many postpartum FTs were added to our cohort at this stage. With our concurrent efforts to include younger and Black non-BRCA individuals, these postpartum FTs tended to be from Black individuals. This overlap introduced an imbalance that could lead to apparent race-associated molecular differences, which were actually confounded by surgical indication in our investigations. Individual cells in the bulk tissue carrying germline BRCA1/2 mutations could have LOH and associated molecular alterations, which would be undetectable in bulk assays before substantial clonal expansion. High-depth (either read depth for bulk studies, or cell number for single-cell studies) or high-fidelity methods for clonal/subclonal studies would be needed to study these early events. While such studies would not be confounded by cell composition, this study provides some guidance on other potential confounders to consider. We showed extensive differences between follicular and luteal phase FTs in premenopause samples. These transcriptional variations with menstrual cycle modulation were similar to changes in the endometrium, albeit of a lower magnitude.
Our study, based on the correlation between cellular fraction and ER-binding enhancers, shows that the estrogen response in the FT is primarily limited to the epithelium, in contrast to the well-described estrogen-responsiveness of endometrial stroma . This likely explains the difference in magnitude between menstrual phases in the endometrium and the FT. We show that in both the endometrium and FT, canonical secretory cell markers vary with the menstrual cycle, with OVGP1 expressed higher in the follicular phase and PAX8/MUC1/KRT7 in the luteal phase. Many FT and/or endometrium-based studies use these markers to drive transgene expression , , and this phase-related difference in expression of these key genes is important to consider. While most non-pregnant premenopause FT samples have more epithelium than postmenopausal and postpartum samples, we identified a subset of samples that showed low epithelium content, mostly within the luteal phase. This group included all four samples in our cohort that had an associated endometrium annotated as inactive. Inactive endometrium is often associated with the continuous use of oral contraceptive pills (OCP) and hormonal intrauterine devices (IUD) . Therefore, we hypothesize that this could be partially due to a hormonal effect. This, if true, could help explain the protective effect of OCP use against ovarian cancer. We collected contraceptive use status and history and made them available as part of the clinical data (Supplementary Data ). However, the hormonal IUD or OCP use data for these samples have not been collected in a consistent manner, making it hard to test this hypothesis. Furthermore, contraceptive use history does not explain all of these inactive cases. Gene and protein expression points to a profile more similar to postmenopause FTs than to postpartum FTs, which are highly influenced by progesterone. The ages of these patients are comparable to those of other premenopause patients, and they are unlikely to be peri-menopausal. Another possibility is that this group may include polycystic ovary syndrome patients . Understanding why this subset of epithelium-sparse FT fimbria occurs could be key to understanding fertility variations and potential additional factors that contribute to differences in ovarian cancer risk. Independent of its relevance for BRCA1/2 and ovarian cancer research, this study is a comprehensive survey of the cellular, epigenomic, transcriptomic, and proteomic landscape of the histologically normal human FT, an organ with both reproductive and oncological implications. This cohort shows cellular and molecular variations associated with age, pregnancy, reproductive status, menstrual cycle phase, and surgical indication. These results are key to understanding FT biology, which in turn will inform strategies for cancer prevention and early detection. Sample collection for high and average risk Fallopian tubes The cases for this study were obtained with patient consent and study approval by the Institutional Review Board at the University of Pennsylvania. Tissues of the fimbriated and proximal portions of histologically benign FTs were fixed in 10% neutral buffered formalin and processed on a Leica 300 ASP tissue processor to create FFPE blocks. All cases were from female patients. In total, 55 blocks from BRCA mutation carriers and 70 from average-risk cases were collected. Clinical data for these cases were obtained from electronic medical records (Supplementary Data ).
DNA and RNA extraction and quality assessment FFPE block tissue sections (10 μm thick sections) were submitted to the VAI Pathology and Biorepository Core (PBC) in 4 batches over 4 years. Each batch was processed and submitted for sequencing at the Van Andel Institute Genomics Core. Eight 10 μm sections in total were cut from each block and shipped to the PBC. Four 10 μm sections were placed in an Eppendorf tube for extraction (2 tubes per block), and both tubes (8 total 10 μm sections) were used for each extraction. All batches were processed using the Qiagen Allprep DNA/RNA FFPE kit following the manufacturer’s protocol for recovering total RNA, including small RNAs. Deparaffinization Solution was used for the deparaffinization. Quantification and quality assessment of nucleic acids were performed with a Qubit Fluorometer and an Agilent TapeStation. Extracted RNA samples with DV200 > 30 were deemed sufficient to continue to library preparation and sequencing. Extracted DNA samples with an average material size of 1000 bp or greater were deemed sufficient to continue to library preparation and sequencing. Construction and sequencing of directional total RNA-seq libraries Libraries were prepared by the Van Andel Genomics Core from 500 ng of total RNA using the KAPA RNA HyperPrep Kit (Kapa Biosystems, Wilmington, MA, USA). Ribosomal RNA material was reduced using the QIAseq FastSelect –rRNA HMR Kit (Qiagen, Germantown, MD, USA). RNA was sheared to 300–400 bp and converted to cDNA. cDNA fragments were ligated to IDT for Illumina TruSeq UD Indexed adapters (Illumina Inc, San Diego, CA, USA). Following adapter ligation, 8 cycles of PCR were performed. Quality and quantity of the finished libraries were assessed using a combination of an Agilent DNA High Sensitivity chip (Agilent Technologies, Inc.), the QuantiFluor® dsDNA System (Promega Corp., Madison, WI, USA), and Kapa Illumina Library Quantification qPCR assays (Kapa Biosystems). Individually indexed libraries were pooled, and 100 bp, paired-end sequencing was performed on an Illumina NovaSeq6000 sequencer to an average depth of 50 M raw paired reads per sample. Base calling was done by Illumina RTA3, and the output of NCS was demultiplexed and converted to FastQ format with Illumina Bcl2fastq v1.9.0. Construction and sequencing of WGBS libraries Libraries were prepared by the Van Andel Genomics Core from 200–250 ng of extracted DNA using the IDT xGen Methylation-Sequencing DNA Library Preparation Kit (IDT, Cat #10009824; Accel-NGS Methyl-Seq DNA Library kit (v3.0), Swift Biosciences, Cat. #30024). DNA was sheared following the manufacturer’s protocol to an average size of 250 bp, and sheared DNA was bisulfite converted using the EZ DNA Methylation-Gold kit (Zymo Research, Cat. #D5005) with an elution volume of 15 µl. Following adapter ligation, 6 cycles of library amplification were performed. Quality and quantity of the finished library pools were assessed using a combination of an Agilent DNA High Sensitivity chip (Agilent Technologies, Inc.) and the QuantiFluor® dsDNA System (Promega Corp., Madison, WI, USA). 100 bp, paired-end sequencing was performed on an Illumina NovaSeq6000 sequencer using an S4, 200 bp sequencing kit (Illumina Inc., San Diego, CA, USA) with 10% PhiX included to improve base diversity. Each sample was sequenced to a minimum raw depth of 450 M reads. Base calling was done by Illumina RTA3, and the output of NCS was demultiplexed and converted to FastQ format with Illumina Bcl2fastq v1.9.0.
FFPE sample processing for proteomic analysis Each of the 155 samples (including multiple technical replicates for some samples from which final abundance values were averaged, see Supplementary Data and ) consisted of five 4 µm unstained FFPE sections placed in 2 mL Eppendorf tubes. Three 1 mm zirconium beads and 125 µL of xylenes were added to each tube. Paraffin solubilization and tissue homogenization were performed using three 30-s cycles on a Beadbeater 16 apparatus (BioSpec Products, Bartlesville, OK). 125 µL of 2X S-Trap extraction buffer (10% sodium dodecyl sulfate [SDS], 100 mM triethyl ammonium bicarbonate [TEAB]) was added to each sample. Protein solubilization was achieved by vortexing for 15 min at room temperature. Disulfide bonds were reduced at 56 °C for 20 min with 1 mM dithiothreitol. Reduced cysteines were alkylated at room temperature for 20 min with 2.25 mM iodoacetamide. Proteins were acidified with 25 µL of 12% phosphoric acid and precipitated with 1.6 mL S-Trap binding buffer (100 mM TEAB, 90% methanol). Protein precipitates were collected by centrifugation at 20,000 × g for 10 min at room temperature. 1.5 mL of the supernatants were discarded. The protein pellets were resuspended in the remaining 400 µL of supernatant and transferred to the wells of a 96-well S-Trap plate (ProtiFi, Fairport, NY). Proteins captured on the S-Trap matrix were washed once with 400 µL of 50% chloroform/50% methanol and 5 times with 400 µL of S-Trap binding buffer using a vacuum manifold for 96-well plates. The S-Trap plates were centrifuged at 800 × g for 15 s to collect residual binding buffer to the bottom of the wells and placed on the vacuum manifold for 10 s. Captured proteins were digested overnight at 37 °C using 5 µg of trypsin diluted in 125 µL of S-Trap digestion buffer (50 mM TEAB). The digested peptides were recovered in 2 mL collection plates (Waters, Milford, MA) by centrifugation at 800 × g for 1 min. A second 3-h digestion was performed using 1 µg of trypsin diluted in 100 µL of S-Trap digestion buffer. Peptides were recovered by centrifugation at 800 × g after each addition of 80 µL of S-Trap digestion buffer, 80 µL of 0.2% formic acid and 80 µL of 50% acetonitrile. Peptide fractionation (for spectral library generation) To create a pooled sample, 0.5% of each sample was combined in a single tube, dried to completion, and solubilized in 10 µL of 20 mM ammonium formate pH10 containing 2% acetonitrile. Peptides were loaded onto a 7 cm × 150 µm column packed with 5 µm XBridge BEH130 C18 beads (Waters). Peptides were fractionated in a concatenated fashion across a 60-min gradient from 2% to 35% acetonitrile using 20 mM ammonium formate pH10 as the aqueous mobile phase and directly collected into 24 conditioned Evotips Pure (Evosep, Odense, Denmark) containing 190 µL of 0.1% formic acid . Evotips were handled as recommended by the manufacturer. LC-MS/MS analysis Peptides were separated using a standardized 31-min gradient “Whisper” method (40SPD) with 0.1% FA and 0.1% FA/99.9% ACN as the mobile phases using an Evosep One liquid chromatography system (Evosep) connected to a 15 cm × 75 µm column packed with 1.9 µm ReproSil-Pur 120 C18-AQ reverse phase beads (Dr. Maisch, Ammerbuch-Entringen, Germany) and introduced into a timsTOF fleX MALDI-2 mass spectrometer (Bruker, Billerica, MA). 
Spectral library generation: Fractionated peptides from the pooled sample were analyzed using a data dependent acquisition (DDA) strategy with the mass spectrometer executing 10 PASEF scans (charge 0–5, ramp time: 100 ms, target intensity: 14,500, intensity threshold: 1750, m/z range: 150–1300, mobility range: 0.6–1.6). Active exclusion was enabled with a release time of 0.4 min. Analysis of individual FFPE samples: Peptides from individual samples were analyzed using data independent acquisition (DIA) mode with the mass spectrometer executing 12 dia-PASEF scans with 2 isolation windows per scan as optimized using the py_diAID algorithm (m/z range: 300–1200, mobility range: 0.7–1.4). Spectral library generation The project-specific spectral library was generated from the DDA runs in the FragPipe computational environment using MSFragger v3.8 . Search parameters included lysine formylation, variable oxidation of methionine, N-terminus methionine excision and acetylation, and fixed carbamidomethylation of cysteine, with all other parameters set as default. Spectra were searched against a human UniProt database of reviewed sequences (downloaded on 8/8/2023) including common contaminants and using reversed sequences as decoys. MSBooster was used to predict spectra and retention times. Peptide-spectrum matches were rescored using default parameters in Percolator and filtered to a 1% false discovery rate. Protein hits were validated using default parameters in ProteinProphet . The spectral library was generated using default parameters. Protein identification and quantitation Protein identification and quantification were performed on each individual sample analyzed in DIA mode using DIA-NN (v 1.8.2 beta 8). The search was performed against the spectral library generated in FragPipe (see above) and included the following parameters: lysine formylation, variable oxidation of methionine, N-terminus methionine excision and fixed carbamidomethylation of cysteine, with 2 missed tryptic cleavages. Additional parameter ranges were specified: peptide length: 6–40, charge state: 2–5, precursor m/z: 300–1700, fragment ion: 200–1700. Search results were re-annotated using the UniProt database used for the MSFragger search. The parameters for the quantitation algorithm were set as follows: mass accuracy: 10 ppm, MS1 accuracy: 15 ppm with match between runs (MBR) enabled, allowed isotopologues, shared spectra exclusion. Gene-based protein inference, double-pass neural network with robust LC quantification, retention time dependent normalization, and smart profiling were used as additional parameters. Quantitation values for each protein were used in analyses. Histology Histologic slides from the FFPE samples were imaged on the 3DHistech Pannoramic MIDI II (catalog number MI2BGHM00010070001) using the Pannoramic Scanner Software Version 3.0.3 at 86X magnification. SVS image files were then imported to QuPath v0.5.1 to crop and add the scale bars. RNAseq data analysis Expression counts from RNA-seq relied on a pipeline based on STAR v2.7.1 and RSEM v1.3.3 . Specifically, prior to alignment, reads were quality filtered using fastp with default settings . Next, reads were aligned to an rRNA reference using bwa, with the putative rRNA-mapped reads removed from downstream analyses . Using STAR, the remaining non-rRNA reads were mapped to the GRCh38.90 reference. We utilized the expected counts for genes from RSEM.
We excluded samples with a library size of fewer than 8 million reads or less than a 70% alignment rate. We normalized counts using the weighted trimmed mean of M values method from the Bioconductor edgeR v4.2.0 package . We then excluded genes with a minimum count of less than 100 across the 94 samples with expression data. This resulted in 27,303 features for differential testing. We corrected for batch effect while preserving any BRCA effect in the RNA counts using the Bayes model in the ComBat_seq function of the R package sva v3.5.0 , . P values were derived from a quasi-likelihood F test following data fitting using a negative binomial generalized log-linear model as implemented with the edgeR functions glmQLFit and glmQLFTest. Marker Identification RNA and protein markers were identified using the findMarkers function from the scran R package v1.32.0 . Markers were found using the log of the expression values with the following options: pval.type = “some”, direction = “up”, test.type = “t”, and min.prop = 0.5. Upset plots were made with the ComplexUpset R package v1.3.3 . DNA methylation data analysis FASTQ files were aligned using the BISulfite-seq CUI Toolkit (BISCUIT) v1.2.0 wrapped within a Snakemake workflow available on github: https://github.com/huishenlab/Biscuit_Snakemake_Workflow , . This was run using Snakemake v7.25.0+. This workflow also performs several other quality control steps. As recommended with the Accel-NGS® Methyl-Seq DNA Library Kit, we first trimmed 15 nucleotides from the end of R2, as these positions are not fully converted. Next, trimmed reads were put through our “biscuit sifter” step, which aligns read pairs with BISCUIT and subsequently removes PCR duplicates before outputting a sorted, aligned, and indexed BAM file . BISCUIT is then used again to ultimately generate a BED file with methylation information. This BED file is read into R using the biscuiteer R package as a bsseq object. These data were then used for analyses including differential methylation, plotting heatmaps, and estimation of stromal composition. The Snakemake workflow also outputs a region-centered binned average methylation from input BED files. This takes the regions, centers them, and calculates the average of 200 bp windows in sliding steps of 5 out to 1000 bp from the region center (specified in the workflow config). This was used to show methylation around features of interest (in our case, CTCF binding sites, transcriptional start sites, and heterochromatin). The Snakemake workflow also generates a binned average methylation matrix using 10,000 bp bins. Finally, a second Snakemake workflow was used to generate the multiscale methylation plot ( https://github.com/huishenlab/multiscale_methylation_plot_pipeline ). This averages DNA methylation in 10 kb to 10 Mb bins, which are visualized using the bisplotti R package available on github ( https://github.com/huishenlab/bisplotti ). In all aligned BRCAm libraries, except two where the exact clinical mutation was not known, we had sufficient coverage in the WGBS to manually confirm the clinical mutation. We did not observe the germline mutations in the RNAseq data due to low transcript abundance. WGBS coverage averaged 18x at BRCA1 and 13x at BRCA2. We identified high-quality heterozygous SNPs in the WGBS data that were also present in the NCBI Clinical Variant Database ( https://ftp.ncbi.nlm.nih.gov/pub/clinvar/vcf_GRCh38/ ) using BISCUIT; no variants of known pathogenicity were identified in the non-BRCAm cohort.
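As an illustration only, and not the authors' exact code, the following R sketch outlines the RNA-seq count-processing steps described above; the objects counts (a gene-by-sample matrix of raw expected counts), batch, and group (BRCA status) are hypothetical names, and the ordering and the simple filtering threshold are simplifying assumptions.

library(edgeR)
library(sva)

# Batch correction that preserves the group (BRCA) effect, as described above
corrected <- ComBat_seq(as.matrix(counts), batch = batch, group = group)

# Build a DGEList, drop low-count genes (assumption: simple total-count threshold),
# and normalize with the weighted trimmed mean of M values (TMM)
y <- DGEList(counts = corrected, group = group)
keep <- rowSums(y$counts) >= 100
y <- y[keep, , keep.lib.sizes = FALSE]
y <- calcNormFactors(y, method = "TMM")

# Negative binomial generalized log-linear model with quasi-likelihood F-tests
design <- model.matrix(~ group)
y <- estimateDisp(y, design)
fit <- glmQLFit(y, design)
res <- glmQLFTest(fit, coef = 2)
topTags(res)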
Differential gene expression A negative binomial generalized log-linear model was then fit to the filtered and batch-corrected count data with edgeR v4.2.0, using the weighted trimmed mean of M values to normalize for library size and composition biases . Groups were compared with and without covariates as indicated in Table . P values were estimated using empirical Bayes quasi-likelihood F-tests and adjusted for multiple comparisons using the Benjamini-Hochberg (BH) method; adjusted P values less than 0.05 were considered significant. Differential methylation To identify regions of differential methylation between non-BRCAm patients and those with germline BRCA1 or BRCA2 mutations, we used the R package DMRcate v2.12.0 . Our design matrix was generated using the edgeR function modelMatrixMeth . Subsequently, the contrast matrix, design matrix, and a bsseq object with all CpGs for all samples in any specific contrast were input into the DMRcate preprocessing function sequencing.annotate with options all.cov = TRUE and FDR = 0.05. Next, DMRcate::dmrcate was called with default options except min.cpgs = 1 and a bandwidth scaling factor of two. The differentially methylated regions were then output with the extractRanges function. Differential protein expression We used the R package DEP (Differential Enrichment and Analysis of Proteomics Data) v1.20.0 for differential protein expression analysis . Variance stabilization normalization was done with the function vsn. Two samples that had greater than 50% missing data were removed. Technical replicates were averaged. When testing for differential abundance, we filtered for proteins identified in 2 out of 3 replicates of at least one condition. Differential testing was performed using the test_diff function from the DEP R package, which applies protein-wise linear models and empirical Bayes statistics via limma, with p values derived from moderated t tests. FDR correction was done with the BH method. In total, we assayed 5957 unique proteins, with 2376 fragments having values for all samples. Statistical analyses and data visualization The Spearman’s rank correlation test was used for all reported rho and P values for pairwise correlations. Spearman’s test was implemented with the cor.test() function from the R package stats v4.2.2. Heatmaps were generated using the R package ComplexHeatmap v2.20.2 with annotations as shown. Pairwise group comparisons were done using the pairwise_wilcox_test function from the rstatix v0.7.2 package. ANOVA tests were run with the rstatix v0.7.2 anova_test function. Lollipop plots of mutations in BRCA1 and BRCA2 were then made using ProteinPaint . BRCA1 domains were derived from the ProteinPaint Web interface and BRCA2 domains were obtained from UniProt . All other plots were made with ggplot2 v3.4.1+. Gene ontology term enrichment was done with clusterProfiler v4.12.0 using the enrichGO function, with p values from Fisher exact tests corrected for multiple testing using the BH method. Reporting summary Further information on research design is available in the Reporting Summary linked to this article.
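For orientation, here is a minimal R sketch (not the authors' code) of the DMRcate sequencing-mode workflow described above; bs (a bsseq object of CpG methylation for the samples in one contrast) and grp (BRCA carrier status) are hypothetical objects, and the choice of coefficient is an assumption.

library(DMRcate)
library(edgeR)

# Expand the sample-level design for methylated/unmethylated counts
design  <- model.matrix(~ grp)
methdes <- modelMatrixMeth(design)

# Annotate CpGs with per-site statistics (all.cov = TRUE and FDR = 0.05 as in the text);
# assumption: the group term is the last column of the expanded design
annot <- sequencing.annotate(bs, methdes, all.cov = TRUE, fdr = 0.05,
                             coef = ncol(methdes))

# Call DMRs with min.cpgs = 1 and a bandwidth scaling factor (C) of two,
# then export the differentially methylated regions as genomic ranges
dmrs   <- dmrcate(annot, C = 2, min.cpgs = 1)
ranges <- extractRanges(dmrs, genome = "hg38")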
Clinical advances in kidney autotransplantation: a review
6632196b-8bd2-425a-b5b2-bf608c8e7a87
11796257
Surgical Procedures, Operative[mh]
Kidney transplantation can be categorized as allogeneic, autologous, and xenogeneic, depending on the source of the donor kidney. Kidney autotransplantation (KAT) was pioneered in humans by Hardy and has been successful in the treatment of upper ureteral stenosis . Subsequently, the application of KAT was expanded to encompass renal vascular lesions, tumors, and loin pain-hematuria syndrome. In a retrospective analysis of the US Nationwide Inpatient Sample database, Moghadamyeghaneh et al. identified renal artery lesions (22.7%), ureteral diseases (17%), and malignancies (14.9%) as the most prevalent surgical indications for KAT. Renal failure occurred in 10.7% of the cases, with overall mortality and complication rates of 1.3% and 46.2%, respectively. Recent advancements in the clinical use of KAT have emerged, largely owing to the widespread adoption of minimally invasive surgical techniques. This study aimed to provide an overview of clinical developments in both the indications for KAT and the related technological advances (Fig. ). Renal vascular lesions Renal artery lesions Renal artery aneurysms are uncommon, affecting approximately 0.1% of the population . Typically, renal artery aneurysms are asymptomatic . The currently recognized criteria for intervention in renal artery aneurysms include aneurysm size > 2 cm, presence in women of reproductive age, associated pain, hematuria, and drug-refractory hypertension . Notably, approximately 70% of individuals with renal artery aneurysms also have hypertension, with some studies reporting this figure as high as 100% . The primary treatment for renal artery aneurysms requiring intervention is surgical repair, including KAT. Earlier data indicated a preference for surgical repair over endovascular repair . However, as endovascular technology advances, it is increasingly preferred owing to its lower complication rates and shorter hospital stays . A meta-analysis conducted by Choksi et al. found no significant differences in mean aneurysm diameter, overall complications, or mortality among endovascular repair, surgical repair, and KAT . Renal artery aneurysms located on the main trunk of the renal artery are suitable for endovascular repair . Conversely, complex renal artery aneurysms (such as those involving renal artery bifurcations or those with distal lesions , failure of endovascular repair , or anatomical structures that are not suitable for endovascular treatment ) may require surgical repair, including KAT. Notably, the first bifurcation of the renal artery is the most common site for renal artery aneurysms . KAT effectively lowers the risk of segmental renal infarction and worsening of hypertension compared with endovascular repair . It can also be used to address residual flow issues following endovascular repair . Unlike in situ surgical repair, which is challenging to perform in an environment with inflammatory adhesion, KAT extends the timeframe for surgical repair. It allows for microsurgical arterial reconstruction in a well-exposed, controlled setting, leading to a higher patency rate . However, according to data reported by Dezfouli, three patients who underwent KAT experienced more intraoperative bleeding and postoperative complications than the four patients each who underwent endovascular repair and in situ surgical repair . Gwon et al. divided a larger sample into three groups: 14 patients in the endovascular repair group, nine in the in situ surgical repair group, and 13 in the KAT group.
At an average follow-up of 30.42 months, there were no statistically significant differences in the estimated glomerular filtration rate (eGFR) between the groups. Additionally, the KAT group showed no significant complications, while the endovascular repair group had three cases of renal infarction and one case of renal failure, and the in situ surgical repair group had one case of renal infarction. KAT is a valuable procedure for complex renal artery stenosis, similar to its application in renal artery aneurysms. In a study conducted by Mhaske et al. , KAT was performed in nine carefully selected patients with renal artery stenosis and refractory hypertension in whom endovascular therapy was not a suitable option. Except for one patient who experienced a fatal myocardial infarction during the procedure, there were no adverse events. Additionally, 2 years after the operation, blood pressure and renal function had improved in most patients compared with preoperative levels. After percutaneous endovascular renal angioplasty, renal artery restenosis is a common issue, observed in 32.0% of patients over a mean follow-up period of 5.1 years . KAT can serve as an effective treatment option for addressing this condition . In a study by Vijayvergiya et al. , KAT was performed in three patients who had previously undergone endovascular treatment but subsequently developed renal artery restenosis. This procedure resulted in postoperative patency of the renal arteries. Furthermore, children , young patients , and patients with non-atherosclerotic stenosis who underwent KAT for complex renal artery stenosis after failing to respond to endovascular treatment also showed positive early and long-term outcomes. The safety and efficacy of KAT have been demonstrated in the treatment of complex renal artery lesions. However, according to a systematic review by Contarini , the procedure has a major postoperative complication rate of 9.4%, with an autograft loss rate of 4.1%. Nevertheless, there were no hospitalization-related deaths or aneurysm recurrences during the follow-up period. A summary of clinical studies by Ramouz , covering KAT cases up to 2021, reported a postoperative complication rate of 6.9%, with no perioperative or follow-up deaths. The postoperative primary arterial patency rate was 90.8–100% for complex renal artery disease and ranged from 75% to 93% during 2–8 years of follow-up . KAT has shown promising results for the management of hypertension. Cure and remission rates for hypertension in patients who underwent this procedure ranged from 56.3% to 61.0% . In a study conducted by Li et al. involving 16 patients with complex renal artery disease and hypertension, the average blood pressure significantly decreased from 150/87 mmHg before surgery to 128/77 mmHg 1 week after surgery ( P < 0.05). Additionally, the use of antihypertensive drugs significantly decreased after surgery ( P < 0.05). Another study conducted by Duprey et al. examined 53 patients with renal artery disease complicated by hypertension. After a mean follow-up period of 8 years, the mean blood pressure decreased from 143/82 mmHg before the operation to 127/72 mmHg at the latest follow-up ( P < 0.05), and the mean number of antihypertensive drugs decreased from 2 to 0.94 ( P < 0.05). Renal vein lesions Nutcracker syndrome (NCS), also known as left renal vein compression syndrome, presents with common symptoms such as hematuria and pain .
It can be classified into anterior and posterior NCS based on the anatomical location of the left renal vein. Anterior NCS is more prevalent and occurs when the left renal vein is compressed between the abdominal aorta and the superior mesenteric artery. Open surgery is the primary treatment for adult patients with NCS, with options including left renal vein transposition and KAT; transposition is often the preferred choice. Long-term follow-up is needed to assess the effectiveness of laparoscopic extravascular stent placement and radiointerventional endovascular stent placement for NCS . Ali-El-Dein et al. reported the successful treatment of anterior NCS using KAT, with all patients experiencing complete symptom relief. They suggested that KAT should be considered as a primary treatment option for anterior NCS. KAT is more effective in normalizing left renal vein pressure because it resolves the traction on the left renal vein caused by left renal ptosis . Additionally, KAT can be applied in cases of recurrent symptoms after left renal vein transposition or after intravascular stent placement ; in such cases, the probability of reintervention within 2 years is 32% . Recently, Neppala et al. reported the successful treatment of NCS using laparoscopic-assisted kidney autotransplantation (LAKAT), with no need for painkillers during a 1-year follow-up. Ureteral diseases Complex ureteral diseases are also important indications for KAT. In a retrospective study of autologous kidney transplant patients in the US Nationwide Inpatient Sample Database, Moghadamyeghaneh et al. found that ureteral disease was the second most common indication and the one with the lowest complication rate. They suggested KAT as an alternative procedure for treating complex ureteral diseases. Ureteral lesions often result from intrapelvic surgery, with incidence rates ranging from 1% to 10% . The development and widespread use of ureteroscopic surgery have contributed to an increase in the incidence of ureteral lesions . Patients with ureteral lesions often undergo multiple medical interventions , making the management of these lesions complex and challenging. KAT is a valuable option for addressing these difficulties. Ureteral avulsion, a rare yet severe injury affecting a long ureteral segment, is commonly treated with KAT . Yakupoglu et al. conducted a study of 12 cases of iatrogenic ureteral avulsion treated with KAT. Two patients required kidney removal owing to renal vein thrombosis, while the remaining patients exhibited satisfactory autograft function. With a mean follow-up time of 46.1 ± 31.7 months, the patients’ mean eGFR at the last visit was 79.4 ± 20.6 ml/min. Bansal et al. treated eight cases of ureteral avulsion using KAT. After a median follow-up of 11 years, no deterioration in renal function or significant complications were observed. Tonyali et al. performed KAT in combination with Boari flap management in five patients with ureteral avulsion. The procedure yielded positive outcomes, with a low complication rate and no potential risk to renal function. Tran et al. retrospectively analyzed 41 patients with complex ureteral strictures who underwent LAKAT. These strictures were mostly caused by iatrogenic injuries during urinary tract stone surgery. The average length of the stenosis was 4 cm, and most strictures were found in the upper ureter. After an average of 63 months of follow-up, three patients had undergone autograft nephrectomy, while four other patients experienced complications.
Roux et al. reported satisfactory results in eight KATs and 14 ileal ureter substitutions for long ureteral strictures. No differences in renal function were observed before and after surgery or between the two treatment modalities. In addition to KAT and ileal ureteral surgery, other surgical methods, such as endoscopic therapy, ureteral-ureteral anastomosis, ureteral bladder reimplantation, psoas hitch ureteroneocystostomy, Boari bladder valve ureteral surgery, buccal mucosa grafting, and appendix interposition, can also be considered based on the location and extent of ureteral damage. In particular, the popularization of buccal mucosal grafting has facilitated complex reconstruction of the ureter . KAT serves as an alternative to other surgical methods and is a suitable option for managing complex ureteral lesions, particularly those involving the upper or long segments of the ureter . Tumors Radical and partial nephrectomies are considered curative treatments for localized renal cancer. Partial nephrectomy is recommended for patients with T1aN0M0 disease because it offers better preservation of renal function, improved quality of life, and increased life expectancy . The preservation of nephrons and avoidance of renal replacement therapy have also led to the consideration of KAT in cases of renal tumors, particularly complex renal tumors that cannot be removed in situ while preserving the organ, such as central renal tumors or tumors involving the renal hilum. Using KAT, Abraham et al. successfully treated three complex renal tumors, two of which were complex renal cancers. Only one patient with a solitary kidney required temporary hemodialysis after surgery, while the others did not develop complications. Zhu et al. reported four patients who underwent KAT for central-type renal tumors. One patient required graft removal owing to renal atrophy 6 months post-surgery, but the remaining grafts continued to function effectively. However, the potential of KAT to increase the risk of malignant renal tumor recurrence is controversial, thereby limiting its application. One Oxford study reported a 5.4% recurrence rate within 10 years of KAT in 37 patients with stage T3 renal cancer. At the 10-year mark, the overall graft and patient survival rates were 82% and 75%, respectively, in 50 patients . Tran et al. conducted a retrospective study of eight patients who underwent LAKAT for renal malignancies. Of the eight patients, seven had highly complex central renal tumors, and four had solitary kidneys. Although all surgical margins were negative, 50% of the patients experienced relapse after surgery. A literature review by Ruiz et al. reported a tumor recurrence rate as high as 25–50% for complex renal cell carcinoma treated with KAT . In contrast, the benign nature of non-malignant kidney tumors supports the use of nephron-sparing surgery. Renal angiomyolipoma, a prevalent benign renal tumor, demonstrated favorable outcomes when KAT was employed in cases of inferior vena cava tumor embolism, as reported by Chen et al. . In cases of urothelial carcinoma affecting the renal pelvis or ureter, particularly when both kidneys are affected or when only one kidney is present, KAT can be considered a viable option . Holmäng et al. conducted a long-term follow-up study spanning 7–20 years in 23 patients with upper tract urothelial carcinoma who underwent KAT. They concluded that KAT may benefit patients with upper tract urothelial carcinoma who have only one kidney. Janssen et al.
reported no instances of relapse at the 5-year follow-up among five patients with a solitary kidney who underwent KAT for upper urinary tract urothelial carcinoma. Steffens et al. reported four cases of urothelial carcinoma of the upper urinary tract in patients with solitary kidneys. The study found no postoperative complications or abnormalities in renal function. Furthermore, there were no instances of tumor recurrence during the 6–14 years of follow-up, and all four patients remained alive. Cheng et al. treated 12 patients with KAT and reported no severe complications or renal function abnormalities, except for one patient with a solitary kidney who required temporary dialysis. The researchers conducted meticulous endoscopic follow-ups and managed to remove small recurrent lesions in three patients through transurethral resection. Furthermore, there are limited reports on the utilization of KAT in the management of bilateral Wilms’ tumors , particularly in cases involving highly complex bilateral Wilms’ tumors that affect the renal sinus . However, with the advancements in robotic surgery, the need to perform bench surgery for tumor excision and renal repair is exceptionally rare, so this would be a very rare use of KAT in the current era. It is worth stressing that this would be a technique of last resort in the modern era of minimally invasive nephron-sparing surgery. KAT has also been explored as a therapeutic approach for retroperitoneal tumors surrounding the kidneys or ureters. These tumors often necessitate complete mass removal. In many cases, this entails the simultaneous removal of the closely attached kidney and ureter. Subsequently, the excised kidney can be preserved through KAT. Bradley et al. observed that a significant proportion (50.2%) of cases involving complete resection of retroperitoneal tumors required the removal of adjacent organs. The most commonly excised structures were the kidney, ureter, and large intestine. Remarkably, even though 20% of patients who underwent resection of retroperitoneal sarcoma simultaneously underwent nephrectomy, pathological examinations revealed that 73% of the kidneys were not invaded by the tumor . These findings partially support the application of KAT for retroperitoneal tumor resection. Good renal function and oncological results have been reported for KAT in retroperitoneal liposarcoma , adrenal neuroblastoma , mixed germ cell tumors , and gangliocytomas . However, it is important to note that certain studies have highlighted an elevated risk of complications. In a large-sample study conducted by Moghadamyeghaneh et al. , the use of KAT for the treatment of malignant renal tumors or malignant tumors involving the kidneys resulted in complications in 47.5% of patients, with 12.3% of patients experiencing transplant renal failure. Loin pain-hematuria syndrome Loin pain-hematuria syndrome (LPHS) is a rare medical condition and an uncommon indication for KAT. Patients often experience severe, unexplained, and chronic unilateral or bilateral lumbar pain associated with macroscopic or microscopic hematuria, as first reported by Little et al. . Given the lack of consensus regarding the underlying pathological mechanisms and etiology of LPHS, there are diverse treatment options for managing this condition.
These options encompass the use of angiotensin-converting enzyme inhibitors or angiotensin II receptor blockers , pain medication administration , intraureteral injection of bupivacaine , renal denervation , radiofrequency ablation , and KAT . There is also a dearth of high-level evidence favoring one strategy over another in the treatment of LPHS. Some studies have indicated that 25–50% of patients with LPHS may experience spontaneous remission within 3–5 years . Therefore, adopting a progressive treatment approach, starting with conservative measures and progressing towards invasive interventions, is often considered a reasonable strategy. Surgical indications should include patients who require high doses of analgesics to control pain and those for whom non-surgical treatments have proven ineffective . Notably, at the Cleveland Clinic, 16 patients with LPHS underwent KAT, and a marked reduction in pain was observed in 75% of these individuals based on pain scores recorded 30 days post-surgery . KAT is a viable option for patients who experience recurrent pain despite undergoing other treatments. Sheil et al. reported that three out of four patients who experienced recurrent pain following renal denervation were successfully treated with KAT. However, it is worth noting that pain recurrence after KAT has been observed, with a meta-analysis conducted by Coffman reporting a recurrence rate of 37.5% . Pain recurrence after KAT usually occurs within 2 years of surgery . Some scholars have proposed that this recurrence may be related to nerve reinnervation and have advocated for intraoperative anastomosis following severing of the ureter. They believe that this approach reduces the risk of autonomic nerve reinnervation or incomplete denervation . To address the issue of pain recurrence following KAT, certain researchers have introduced the “UW-LPHS trial” and renal hiltal block to assess the potential benefits for patients undergoing KAT. Solitary or bilateral kidney disease When individuals with either a solitary kidney or both kidneys are affected by complex kidney-related diseases, such as ureteral avulsion in a patient with a solitary kidney or bilateral complex renal artery aneurysms affecting both kidneys, KAT can be considered as a highly advantageous approach. This is primarily due to its ability to preserve nephrons and maintain renal function, offering a superior alternative to nephrectomy. Morin et al. conducted KAT in nine patients with solitary kidneys. In this cohort, all patients experienced a temporary increase in serum creatinine levels postoperatively but achieved full recovery within 4–10 days. Remarkably, their renal function remained stable at baseline throughout the mean follow-up period of 89 months. Similarly, Gwon et al. performed KAT in nine patients with solitary kidney disease. One patient unfortunately died during the perioperative period owing to multiple cerebral embolisms. Except for one patient with renal cell carcinoma, the eGFR of the remaining patients did not exhibit a significant decrease 1 year after surgery. Nayak et al. managed three cases of complex renal cancer involving the renal hilum of solitary kidneys using LAKAT. All patients maintained stable renal function during the 39-month follow-up period following the surgery. In another study by Ju et al. , LAKAT was employed to treat three cases of renal cell cancer involving solitary kidneys and three cases involving both kidneys. 
Among these patients, two required temporary hemodialysis after surgery. However, during the follow-up period, all patients exhibited a mean serum creatinine level of less than 200 µmol/L and did not necessitate further hemodialysis. Tragically, one patient died 18 months after surgery owing to multiple metastases, but the five remaining patients were alive and free from tumor recurrence. The common indications mentioned above are illustrated in Fig. . Other uncommon indications for KAT are shown in Table . Renal artery lesions Renal artery aneurysms are uncommon, with 0.1% of the population experiencing renal artery aneurysms . Typically, renal artery aneurysms are asymptomatic . The currently recognized criteria for intervention in renal artery aneurysms include aneurysm size > 2 cm, presence in women of reproductive age, associated pain, hematuria, and drug-refractory hypertension . Notably, approximately 70% of individuals with renal artery aneurysms also have hypertension, with some studies reporting this figure as high as 100% . The primary treatment for renal artery aneurysms requiring intervention is surgical repair, including KAT. Earlier data indicated a preference for surgical repair over endovascular repair . However, as endovascular technology advances, it is increasingly preferred owing to its lower complication rates and shorter hospital stays . A meta-analysis conducted by Choksi et al. found no significant differences in mean aneurysm diameter, overall complications, and mortality between endovascular repair, surgical repair, and KAT . Renal artery aneurysms located on the main trunk of the renal artery are suitable for endovascular repair . Conversely, complex renal artery aneurysms (such as those involving renal artery bifurcations or those with distant lesions , failure of endovascular repair , and anatomical structures that are not suitable for endovascular treatment ) may require surgical repair, including KAT. Notably, the first bifurcation of the renal artery is the most common site for renal artery aneurysms . KAT effectively lowers the risk of segmental renal infarction and worsening of hypertension compared with endovascular repair . It can also be used to address residual flow issues following endovascular repair . Unlike in situ surgical repair, which is challenging to perform in an environment with inflammatory adhesion, KAT extends the timeframe for surgical repair. It allows for microsurgical arterial reconstruction in a well-exposed, controlled setting, leading to a higher patency rate . However, according to Dezfouli’s data, three kidney autotransplants experienced more intraoperative bleeding and postoperative complications than four patients each who underwent endovascular repairs and in situ surgical repairs . Gwon et al. divided a larger sample into three groups: 14 patients in the endovascular repair group, nine in the in situ surgical repair group, and 13 in the KAT group. At an average follow-up of 30.42 months, there were no statistically significant differences in the estimated glomerular filtration rate (eGFR) between the groups. Additionally, the KAT group showed no significant complications, while the endovascular repair group had three cases of renal infarction and one case of renal failure, and the in situ surgical repair group had one case of renal infarction. KAT is a valuable procedure for complex renal artery stenosis, similar to its application in renal artery aneurysms. In a study conducted by Mhaske et al. 
, KAT was performed in nine carefully selected patients with renal artery stenosis and refractory hypertension in whom endovascular therapy was not a suitable option. Except for one patient who experienced a fatal myocardial infarction during the procedure, there were no adverse events. Additionally, the blood pressure and renal function levels of most patients improved compared to preoperative levels 2 years after the operation. After percutaneous endovascular renal angioplasty, renal artery restenosis is a common issue observed in 32.0% of patients over a mean follow-up period of 5.1 years . KAT can serve as an effective treatment option for addressing this condition . In a study by Vijayvergiya et al. , KAT was performed in three patients who had previously undergone endovascular treatment but subsequently developed renal artery restenosis. This procedure resulted in the postoperative patency of the renal arteries. Furthermore, children young patients , and patients with non-atherosclerotic stenosis undergoing KAT for complex renal artery stenosis who did not respond to endovascular treatment also showed positive early and long-term outcomes. The safety and efficacy of KAT have been demonstrated in the treatment of complex renal artery lesions. However, according to a systematic review by Contarini , the procedure has a major postoperative complication rate of 9.4%, with an autograft loss rate of 4.1%. Nevertheless, there were no hospitalization-related deaths or aneurysm recurrences during the follow-up period. A summary of clinical studies by Ramouz , covering KAT cases up to 2021, reported a postoperative complication rate of 6.9%, with no perioperative or follow-up deaths. The postoperative primary arterial patency rate was 90.8–100% for complex renal artery disease and ranged from 75 to 93% during 2–8 years of follow-up . KAT has shown promising results for the management of hypertension. Cure and remission rates for hypertension in patients who underwent this procedure ranged from 56.3–61.0% . In a study conducted by Li et al. involving 16 patients with complex renal artery disease and hypertension, the average blood pressure significantly decreased from 150/87 mmHg before surgery to 128/77 mmHg 1 week after surgery ( P < 0.05). Additionally, the use of antihypertensive drugs significantly decreased after surgery ( P < 0.05). Another study conducted by Duprey et al. examined 53 patients with renal artery disease complicated by hypertension. After a mean follow-up period of 8 years, the mean blood pressure decreased from 143/82 mmHg before the operation to 127/72 mmHg at the latest follow-up ( P < 0.05), and the mean number of antihypertensive drugs decreased from 2 to 0.94 ( P < 0.05). Renal vein lesions Nutcracker syndrome (NCS), also known as left renal vein compression syndrome, presents with common symptoms such as hematuria and pain . It can be classified into anterior and posterior NCS based on the anatomical location of the left renal vein. Anterior NCS is more prevalent and occurs when the left renal vein is compressed between the abdominal aorta and superior mesenteric artery. Open surgery is the primary treatment for adult patients with NCS, with options including left renal vein transposition and KAT; transposition is often the preferred choice. Long-term follow-up is needed to assess the effectiveness of laparoscopic extravascular stent placement and radiointerventional endovascular stent placement for NCS . Ali-El-Dein et al. 
reported the successful treatment of anterior NCS using KAT, with all patients experiencing complete symptom relief. They suggested that KAT should be considered as a primary treatment option for anterior NCS. The use of KAT is more effective in normalizing left renal vein pressure because it resolves the left renal vein pull caused by left renal ptosis . Additionally, KAT can be applied in cases of recurrent symptoms after left renal vein transposition or recurrent symptoms after intravascular stent placement . Reintervention within 2 years has a 32% probability in such cases . Recently, Neppala et al. reported the successful treatment of NCS using laparoscopic-assisted kidney autotransplantation (LAKAT) without the need for painkillers, as observed during a 1-year follow-up.
Traditional KAT is performed using an open surgical procedure. The process begins with the removal of the diseased kidney, followed by its trimming on the workbench under controlled conditions at a temperature of 4 °C. Subsequently, the repaired kidney is surgically reimplanted ectopically into the iliac fossa. During this procedure, the renal arteriovenous system is primarily connected to the iliac vessels, while the renal ureter is re-anastomosed to the bladder. Therefore, compared with other surgical procedures, KAT presents risks of potential short- or long-term complications that may arise following vascular and urinary tract anastomosis. Additionally, some surgeons choose not to sever the ureter but instead reconnect the renal blood vessels with the aorta and vena cava following extracorporeal repair of the kidney, and subsequently place the kidney back into its original renal fossa; this procedure is known as orthotopic KAT. Orthotopic KAT circumvents potential complications associated with ureterocystostomy but is primarily utilized in cases involving renal vascular lesions. However, this procedure involves partial clipping of the aorta during anastomosis, which has a more significant hemodynamic impact than partial clipping of the iliac vessels. Alterotopic KAT is a widely used method (Fig. ). Nevertheless, anastomosis and perfusion of the renal arteries may not be ideal in cases involving iliac vascular lesions, such as atherosclerosis. Consequently, KAT is relatively contraindicated in patients with severe iliac vascular disease or retroperitoneal fibrosis affecting the iliac vessels. An important contraindication to KAT is insufficient residual renal function in the affected kidney, which may be defined as a split eGFR below 15 or 20 mL/min/1.73 m², depending on the threshold used . In other words, if kidney function is severely compromised or minimal, KAT may not be a suitable treatment option. With the advancement of minimally invasive surgery, KAT is increasingly performed using minimally invasive techniques. Conventional KAT has drawbacks, such as significant trauma and a longer postoperative hospital stay. However, Ramouz et al. revealed that LAKAT results in a shorter postoperative hospital stay than open KAT . Fabrizio et al. described the first case of LAKAT involving laparoscopic nephrectomy, workbench surgery, and open kidney transplantation. They successfully treated a patient with a mucosal avulsion of the proximal ureter. Later, the use of LAKAT expanded to include conditions such as LPHS , renal artery aneurysms , and renal tumors . Tran et al. conducted a retrospective study involving the largest cohort of patients who underwent LAKAT ( n = 52). The primary indications for surgery were ureteral strictures and renal malignancy. After an average follow-up of 73.5 months, 90% of the patients retained autograft function. Complications occurred in 15% of the patients, but there were no Clavien–Dindo grade IV or V complications, and no deaths directly attributed to the surgery. Their study provided strong evidence of the safety and long-term effectiveness of LAKAT. An increasing number of scholars believe that laparoscopic nephrectomy, workbench surgery, and open kidney transplantation should be considered the gold standard for KAT. To further minimize the incision and trauma, Cui et al. successfully performed LAKAT in vivo using a 3D-printed kidney cooling sleeve.
The kidney did not need to be removed from the body; all procedures, including trimming of the renal vessels, anastomosis, and cold perfusion, were performed in vivo. Hoznek et al. achieved a significant milestone by performing the first robotic-assisted kidney transplantation. Following this breakthrough, Gordon et al. successfully performed the first case of robotic-assisted kidney autotransplantation (RAKAT), which was conducted to address a long segment of ureteral necrosis. Furthermore, Lee et al. developed techniques to shorten the ischemic time. However, these studies used intracorporeal robotics to treat ureteral diseases. Extracorporeal RAKAT requires the use of a GelPOINT device to extract the affected kidney for workbench surgery. Yao et al. expanded the application of RAKAT to include complex renal artery aneurysms. They operated by making a Pfannenstiel incision to insert the GelPOINT device, along with six small ports. However, this method requires multiple ports. Kaouk et al. reported their experience with a periumbilical single-port robotic platform, completing three cases of autologous kidney transplantation and six cases of allogeneic kidney transplantation. In these cases, the kidneys were repaired extracorporeally using the GelPOINT device, and ideal postoperative recovery and renal function were observed. Thus, the single-port robotic platform holds promise for reducing postoperative complications. They later summarized eight cases of RAKAT using the same procedure, with no postoperative complications or graft function loss, stable renal function, and improved back pain symptoms . This approach would be ideal in terms of minimizing patient morbidity and likely represents the future of RAKAT. In a study by Decaestecker et al. , seven patients received extracorporeal or intracorporeal RAKAT for benign diseases. One case of Clavien–Dindo grade > II complications occurred after the operation, and the median hospitalization time was 5 days. No discomfort or abnormal renal function was observed 3 months after surgery. Among 29 patients with benign disease in Breda’s study , 15 underwent extracorporeal RAKAT, while 14 underwent intracorporeal RAKAT. Intracorporeal RAKAT had shorter cold ischemia and total ischemia durations, but extracorporeal RAKAT showed faster recovery of transplanted kidney function. The total complication rate within 90 days after surgery was 34.4%, with Clavien–Dindo grade > II complications in 13.8% of patients. A renal function level similar to the preoperative level was observed 1 year after surgery. The study by Mejia et al. represents the largest RAKAT cohort to date , with 32 patients who underwent intracorporeal RAKAT: three with LPHS and 29 with NCS. All patients successfully completed the RAKAT procedure, with complete improvement in pain symptoms in 63%, complications in 50%, graft failure in 6%, and no deaths during a mean follow-up of 10.9 months. Overall, RAKAT results in smaller surgical scars and a faster postoperative recovery . Other literature on RAKAT is summarized in Table . However, there are still not enough documented cases, with the majority involving benign primary conditions. Additional investigations are required to assess the long-term results in patients receiving RAKAT. Currently, KAT is often considered an elective treatment option for renal vascular lesions, ureteral diseases, tumors, and loin pain-hematuria syndrome.
It has a reliable effect and represents a final option for patients requiring kidney preservation, particularly those with conditions affecting a solitary kidney or both kidneys. On the technical front, LAKAT has demonstrated both safety and efficacy. The application of robotic surgical systems has yielded satisfactory results in KAT. Currently, the primary focus of the advancement of KAT techniques is to minimize surgical trauma. However, it is important to note that research in the field of KAT predominantly consists of case series and retrospective studies, lacking higher levels of evidence. Despite these limitations, KAT remains a valuable and increasingly refined therapeutic option for patients with complex renal conditions requiring kidney preservation.
Phosphoproteomics for studying signaling pathways evoked by hormones of the renin‐angiotensin system: A source of untapped potential
INTRODUCTION The Renin‐Angiotensin System (RAS) is a complex neuroendocrine system composed of the protein angiotensinogen (AGT), peptide hormones derived from AGT after limited proteolysis, and several receptors (Figure ). RAS components are found in the vast majority of tissues, controlling a large variety of processes including arterial blood pressure and extracellular fluid volume, learning/memory, metabolism, inflammation, fibrosis, reproduction, cell proliferation etc. Disturbances in the RAS are involved in several diseases such as hypertension and related organ damage, kidney disease, cancer, fibrotic disease, ischemic brain damage, among others. , , Understanding the function of the RAS is, therefore, paramount for preventing and treating RAS‐associated disorders. Knowledge of signaling mechanisms elicited by RAS effectors is essential for a deeper understanding of the molecular mechanisms underlying RAS functions. RAS‐related signaling mechanisms have been investigated by classical methods (e.g., Western blotting) for many decades and created a solid foundation of knowledge. However, antibody‐based methods have limitations such as availability of commercial antibodies with high specificity and sufficient sensitivity. Another limitation is the slow throughput due to the “one protein at a time” approach. Therefore, the investigation of changes in abundance or phosphorylation of proteins within signaling cascades by antibody‐based methods is limited to a quite restricted number of target proteins. Only recently, mass spectrometry (MS)‐based “antibody‐free” approaches have been added to the armamentarium for studying the RAS. Generally, MS‐based techniques have the advantage of very high sensitivity and of the possibility to determine changes in abundance of thousands of proteins at the same time. Importantly, MS‐based techniques are also suitable for measuring agonist‐induced post‐translational modifications (PTMs) such as changes in protein phosphorylation, methylation or glycosylation within the entire cell/tissue proteome. Since PTMs, in particular phosphorylations, are often responsible for changing the activation status of a protein, particularly enzymes, information on PTMs and the respective bioinformatical analysis of such data allows inferences on the activation/inhibition of signaling cascades or other relevant biological processes. This is an important advantage over studies on protein abundance or mRNA expression only, since data on expression do not allow conclusions on protein activity. This article reviews existing studies which applied MS‐based techniques for studying RAS signaling. It focuses on studies applying phosphoproteomics as this technique allows monitoring protein phosphorylation/dephosphorylation events associated with signal transduction. In addition, our article provides an overview over signaling pathways that are shared by different receptors of the protective arm of the RAS as identified by phosphoproteomics. Finally, we discuss knowledge gaps which could be addressed in the future using MS‐based approaches. RAS LIGANDS, ENZYMES, AND RECEPTORS The discovery of the RAS began in 1898, when Tigerstedt and his assistant Bergman working at the Karolinska Institute in Sweden reported that a protein (renin) extracted from rabbit kidney induced pressor effects when injected into another rabbit. 
Forty years later, two independent research groups identified the octapeptide angiotensin (Ang) II (H-DRVYIHPF-OH) to be the active hormone responsible for this pressor effect (refer to the literature for a historical review of Ang II). Ang II is produced from AGT in a two-step enzymatic process involving renin and angiotensin-converting enzyme (ACE) (Figure ). In the 1970s, Ang III (H-RVYIHPF-OH) was identified as a product of the enzymatic removal of the aspartate residue from the N-terminus of Ang II. During the 1970–80s, studies involving Ang II analogues such as Sar 1 -Ala 8 -Ang II, Sar 1 -Cys(Me) 8 -Ang II or Ang III revealed considerable variability in the responses elicited by these agonists across different tissues, indicating the involvement of two or more receptors in mediating the responses of RAS effectors. This assumption was finally proven in 1989, when ligands specific for certain receptor subtypes became available, such as the non-peptide compounds DuP 753/Ex89 (losartan, AT 1 antagonist), PD123319 (AT 2 antagonist), and the Ang II-peptide analogue CGP42112A (AT 2 agonist). Using these new tools, two independent research groups observed differential displacement of Ang II by these compounds in various tissue preparations, which led to the identification of two distinct receptor subtypes termed the AT 1 receptor (AT 1 R) and the AT 2 receptor (AT 2 R). The existence of these two receptor subtypes was finally proven in the early 1990s with the cloning of the respective cDNA sequences. The first reports on Ang IV (H-VYIHPF-OH) were published in the 1960–70s and were based on structure-to-function studies using Ang II N-terminal fragments. At that time, however, Ang IV was deemed to be biologically inactive. Only from the 1980s onwards were biological effects associated with Ang IV unveiled, mainly modulation of animal behavior such as improvement of learning and memory recall. Ang IV exerts some of its effects by low-affinity binding to the AT 1 R and the AT 2 R. However, the main endogenous target for Ang IV is the insulin-regulated aminopeptidase (IRAP), also referred to as the AT 4 R, which was only identified in 2001. IRAP has enzymatic activity which is inhibited by Ang IV upon binding. Ang IV can be formed directly from Ang II by dipeptidyl aminopeptidases (DAP) or as an end-product of Ang II N-terminal processing by aminopeptidases (AP), with Ang III as an intermediate of this process (Figure ). Studies from the late 1980s reported biological effects of Ang-(1–7) (H-DRVYIHP-OH), a peptide previously considered an inactive product of Ang II degradation. However, only in 2003 was the receptor Mas (MasR) identified as the receptor for Ang-(1–7). In the second half of the 2000s and the first half of the 2010s, two more RAS peptides were discovered: Ang A (H-ARVYIHPF-OH), acting via the AT 1 R to elicit effects similar to those of Ang II/AT 1 R, and Alamandine (H-ARVYIHP-OH) together with its receptor MrgD. Alamandine and Ang-(1–7) are both 7-mer peptides differing only at position 1: Ala 1 in Alamandine versus Asp 1 in Ang-(1–7) (Figure ). It is believed that an enzyme with decarboxylase activity is responsible for producing Alamandine by removing a CO 2 group from the side chain of Asp 1 to produce Ala 1 , thus transforming Ang-(1–7) into Alamandine (Figure ). However, to date, such an enzyme has yet to be identified. Figure presents an up-to-date view of the RAS including its two functional arms: the classical (canonical) axis and the protective (non-canonical) axis.
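To make the fragment nomenclature above easier to follow, the short Python sketch below derives the smaller RAS peptides from the Ang II sequence given in the text by simple slicing. The variable names are ours and the snippet is purely didactic; it is not part of any analysis pipeline described in this review.

```python
# Single-letter sequences of the peptides named above (Ang II = H-DRVYIHPF-OH).
ANG_II = "DRVYIHPF"

derived = {
    "Ang III":    ANG_II[1:],         # Asp1 removed by aminopeptidase   -> RVYIHPF
    "Ang IV":     ANG_II[2:],         # further N-terminal trimming      -> VYIHPF
    "Ang A":      "A" + ANG_II[1:],   # Asp1 replaced by Ala1            -> ARVYIHPF
    "Ang-(1-7)":  ANG_II[:7],         # C-terminal Phe removed           -> DRVYIHP
    "Ang-(1-5)":  ANG_II[:5],         # degradation product of Ang-(1-7) -> DRVYI
    "Alamandine": "A" + ANG_II[1:7],  # Ang-(1-7) with Asp1 -> Ala1      -> ARVYIHP
    "Ala-(1-5)":  "A" + ANG_II[1:5],  # Alamandine-(1-5)                 -> ARVYI
}

for name, seq in derived.items():
    print(f"{name:12s} {seq}")
```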
The main receptor of the classical axis is the AT 1 R, whereas the main receptors of the protective axis include the AT 2 R, MasR, and MrgD (Figure ). Most recently, Ang-(1–5), a degradation product of Ang-(1–7), was shown to be another biologically active hormone of the RAS. Thorough characterization of the peptide revealed that it is an endogenous AT 2 R agonist, which elicits effects typical of AT 2 R activation such as nitric oxide (NO) synthesis via protein kinase B (Akt)/endothelial nitric oxide synthase (eNOS) signaling, relaxation of mouse and human resistance arteries, and lowering of blood pressure in male and female normotensive mice. Another recent addition to the RAS peptide family was the endogenous peptide Alamandine-(1–5) [Ala-(1–5)] (H-ARVYI-OH). Ala-(1–5) seems to signal through the protective RAS receptors MasR, MrgD, and AT 2 R. However, only some effects of Ala-(1–5) are typical of MasR-, MrgD- or AT 2 R-mediated actions (e.g., increased NO production and reduction of blood pressure in normotensive (Wistar) and hypertensive (SHR) rats), whereas others are not (e.g., constriction of mouse aortic rings and reduced contractility of cardiac myocytes). The unconventional effects elicited by Ala-(1–5) suggest that it potentially binds to different receptor sites and/or elicits G-protein-independent signaling pathways. Effects evoked by the two RAS arms are usually counter-regulatory. For example, while activation of the classical axis leads to vasoconstriction, inflammation, fibrosis, and proliferation, activation of the protective axis leads to vasodilation and to anti-inflammatory, anti-fibrotic, and antiproliferative effects (Figure ). PHOSPHOPROTEOMICS FOR THE STUDY OF CELL SIGNALING WITHIN THE RAS AND BEYOND Proteomics encompasses the investigation of a specific proteome, defined as the set of proteins being synthesized or degraded within a particular cell or tissue at a specific time. The development of proteomics as we know it today took place in the 1990s, but its advancement accelerated from the 2000s onwards. This progress was primarily propelled by the introduction of novel sample preparation techniques, by more sophisticated mass spectrometers, and by the development of new bioinformatic tools. From the early days of proteomics, it was evident that novel strategies were required to extend the application of this technique to the study of protein phosphorylation, a PTM that typically occurs at low abundance and therefore cannot be identified through conventional proteomics approaches. To overcome this issue, phosphopeptide enrichment techniques were developed, enabling the identification, localization, and quantification of phosphorylation sites. Therefore, in contrast to proteomics, which serves to quantify protein abundances (proteome quantification), phosphoproteomics quantifies protein phosphorylation levels (phosphoproteome quantification), thus allowing conclusions on the activation level of certain proteins, such as kinases and phosphatases, within signaling cascades. When applying phosphoproteomics, it is important to run proteomics as well on the same samples so that phosphorylation levels can be normalized to protein abundances. Proteomics and phosphoproteomics can be applied in two distinct manners: the targeted and the untargeted approaches. Untargeted proteomics and phosphoproteomics are a hypothesis-generating approach and do not require the pre-definition of certain proteins of interest.
Instead, it maps the global proteome or phosphoproteome of a cell or tissue for changes in protein expression or phosphorylation in response to a certain intervention, thereby potentially identifying so far unknown biological processes. In contrast, targeted proteomics/phosphoproteomics is a hypothesis-driven approach that quantifies pre-defined proteins and phosphoproteins (targets), which are assessed in a similar way to antibody-based methods (i.e., Western blotting) but without the need for antibodies and without restrictions regarding the number of investigated proteins per experiment. While the untargeted approach is typically favored during the discovery phase of a research project, the targeted approach can be employed to validate findings obtained during the discovery phase. Figure illustrates a typical workflow for investigating a specific proteome and phosphoproteome within the context of cellular signaling. Since changes in the phosphorylation status of certain proteins at specific residues are among the most common features of cell signaling cascades, detection of such events (phosphorylation/dephosphorylation) by MS-based phosphoproteomics represents a potent tool for unbiased exploration of cell signaling pathways. Nevertheless, so far only a few studies have investigated signaling mechanisms within the RAS by phosphoproteomics, meaning that the power of this technique has not yet been fully taken advantage of in RAS research. Since 2010, the year of the first two studies on RAS signaling using phosphoproteomics, only 22 articles have been published that in one way or another deal with signaling and RAS components. This contrasts sharply with the more than 17000 studies on RAS signaling since 2010 using other techniques or the more than 3500 publications that have employed phosphoproteomics to study cellular signaling networks unrelated to RAS in the same timeframe (PubMed searches made in November 2024 using the following search terms: “angiotensin AND signalling”; “signalling AND phosphoproteome”; “angiotensin AND signalling AND phosphoproteome”). Therefore, in the following sections we will highlight the power of phosphoproteomics for the investigation of RAS-related intracellular signaling, aiming to spark interest in phosphoproteomics by providing a critical assessment of the utilization of this technology and by reviewing those studies which have applied phosphoproteomics in RAS research so far. Table summarizes the key publications discussed in this review, which helped define what is now known about RAS signaling. THE AT 1 RECEPTOR The AT 1 R is a classical class A G-protein-coupled receptor (GPCR) which signals through G q/11 and G 12/13 pathways and through β-arrestin. AT 1 R signaling mechanisms have been well characterized by conventional methods and include activation of phospholipase C, IP 3 -triggered calcium release, protein kinase C-mediated cell proliferation and smooth muscle contraction, as well as activation of the Rho kinase, MAPK/ERK (mitogen-activated protein kinases/extracellular signal-regulated kinases), JAK/STAT (tyrosine-protein kinases JAK/signal transducer and activator of transcription), NF-κB (nuclear factor kappa-light-chain-enhancer in B cells), TGF-β (transforming growth factor-beta), Src family (proto-oncogene tyrosine-protein kinase Src), PI3K (phosphatidylinositol 4-phosphate 3-kinases)/Akt, and CaMK (calcium/calmodulin-dependent protein kinases) pathways.
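Before turning to the individual studies, the quantification step mentioned in the workflow above, namely normalizing phosphorylation levels to protein abundances measured in the same samples, can be made concrete with a small Python sketch. The table layout, column names, and all numbers below are invented purely for illustration and are not taken from any of the studies discussed in this review.

```python
import numpy as np
import pandas as pd

# Invented example data: one row per phosphosite (or protein) and condition.
phospho = pd.DataFrame({
    "protein":   ["LCP1", "LCP1", "SSH1", "SSH1"],
    "site":      ["S5",   "S5",   "S937", "S937"],
    "condition": ["vehicle", "agonist", "vehicle", "agonist"],
    "intensity": [1.0e6, 3.2e6, 8.0e5, 3.5e5],
})
proteome = pd.DataFrame({
    "protein":   ["LCP1", "LCP1", "SSH1", "SSH1"],
    "condition": ["vehicle", "agonist", "vehicle", "agonist"],
    "abundance": [2.0e7, 2.2e7, 5.0e6, 4.8e6],
})

# Normalize each phosphosite intensity to the abundance of its parent protein,
# so that changes in phosphorylation are not confounded by changes in expression.
merged = phospho.merge(proteome, on=["protein", "condition"])
merged["normalized"] = merged["intensity"] / merged["abundance"]

wide = merged.pivot_table(index=["protein", "site"],
                          columns="condition", values="normalized")
wide["log2FC"] = np.log2(wide["agonist"] / wide["vehicle"])
print(wide.round(3))
```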
According to our literature search, five studies have been published applying phosphoproteomics for studying AT 1 R signaling, four of which investigated signaling mechanisms of biased agonists. Generally, depending on the agonist applied, stimulation of GPCRs can result in activation of either the entire signalosome or only a subset of signaling mechanisms. This phenomenon is known as biased agonism and was initially observed for the PACAP type I (PAC1) receptor and the muscarinic M1 receptor, and subsequently also for several other GPCRs including the AT 1 R. In the case of the AT 1 R, biased ligands selectively activate (with different efficacy profiles) either G-protein-dependent pathways or β-arrestin signaling. Before reviewing the phosphoproteomics studies that investigated biased AT 1 R signaling, we will first review the two studies that looked at AT 1 R signaling in a general way. One of these studies examined AT 1 R signaling in AT 1 R-transfected immortalized podocytes (AB8 3F-AT 1 R). Treatment with Ang II (100 nM, 15 mins) led to changes in the phosphorylation status of 6323 protein fragments that could be assigned to 2081 distinct proteins. As expected for a classical class A GPCR, phosphorylation events were more frequent than dephosphorylation events. Within the phosphorylated sites, the authors observed that the MAPK motif (proline at position +1) was enriched. This is consistent with substantial evidence in the literature that the MAPK pathway is involved in AT 1 R signaling. Other proteins found to undergo large changes in their phosphorylation status were tenascin, integrin-β6, neuroblast differentiation-associated protein, LCP1 (L-plastin), optineurin, plasminogen activator inhibitor 1, serine/threonine protein kinase D2, protein bicaudal C homolog 1, palladin, and ephrin type-A receptor. Gene ontology analysis of Ang II-treated AB8 3F-AT 1 R phosphoproteomics data revealed an enrichment of terms related to the actin cytoskeleton and lamellipodia; among the proteins in these categories was LCP1 (phosphorylated at Ser 5 ), which is a member of the α-actinin family and important for actin assembly. Ang II-induced phosphorylation of LCP1 at Ser 5 was validated by Western blot analysis and shown to be indeed AT 1 R-mediated, since it was inhibited by the AT 1 R antagonist losartan. In further experiments using specific kinase inhibitors, the authors could show that Ang II-induced phosphorylation of LCP1 was dependent on activation of ERK, RSK (ribosomal S6 kinase), PKC (protein kinase C) and PKA (cAMP-dependent protein kinase). Finally, functional experiments demonstrated Ang II-induced trafficking of LCP1 together with actin to the cell margins as well as Ang II-induced formation of filopodia and cell–cell contacts that was dependent on Ser 5 -LCP1 phosphorylation. The authors compared the outcome of their phosphoproteomic study with a study from Jakob L Hansen's group, which investigated AT 1 R signaling by phosphoproteomics applying a largely identical protocol (100 nM Ang II for 3 and 15 mins) but in a different renal cell line, AT 1 R-transfected human embryonic kidney (HEK)-293 cells. The comparison revealed that 121 proteins with increased phosphorylation levels in response to Ang II were identical in both studies, whereas 323 phosphoproteins were only detected in podocytes and 406 phosphoproteins were only detected in AT 1 R-HEK-293 cells upon AT 1 R activation.
Some of the HEK‐293 specific phosphoproteins may be attributable to the 3 mins stimulation, since apparently proteins from both stimulations (3 and 15 mins) were analyzed together, whereas in podocytes, only the 15‐min time‐point was investigated. Nevertheless, the important lesson from this comparison is that it is not possible to get a general picture of the AT 1 R‐coupled signaling network from a single study, since results will always be cell/tissue specific and differ from other cells/tissues. It should also be noted that both studies used transfected cells with an artificially high expression level of AT 1 Rs. This may have an impact on the results meaning that AT 1 R‐mediated signaling in primary cells with endogenous receptor expression may be different from signaling in overexpressing cell lines. Interestingly, the study by the Hansen group included a comparative phosphoproteomics approach in order to distinguish between G‐protein‐mediated and β‐arrestin‐mediated AT 1 R signaling by treating AT 1 R‐HEK‐293 either with the unbiased agonist Ang II (100 nM) or with the biased agonist [Sar 1 ,Ile 4 ,Ile 8 ]Ang II (SII Ang II; 18.7 μM) which activates Gα q protein‐independent (including β‐arrestin) signaling. The authors only included phosphosites with an increase (not a decrease) in phosphorylation level into further analysis. They found 1183 of such regulated phosphosites on 527 phosphoproteins, with 427 (36%) phosphosites regulated in response to SII Ang II meaning they are attributable to Gα q protein‐independent AT 1 R signaling. Further analysis of the data generally revealed a much more diverse and frequent abundance of Gα q protein‐independent AT 1 R signaling than previously thought. This included a considerable importance of the AGC/CAM kinase family, which includes for example PKD (protein kinase D), PKC and CaMKII, for both Ang II and SII Ang II‐induced signaling. Unexpectedly, it was noted that all PKD proteoforms were enriched in the dataset of AT 1 R‐HEK treated with SII Ang II coinciding with an increased phosphorylation of peptides with the consensus PKD phosphorylation motif. In further experiments using pharmacological inhibitors, the authors found that PKD activation by SII Ang II in AT 1 ‐HEK (i.e., Gα q protein‐independent) involved the Ras/ROCK (Rho‐associated protein kinase)/PKCδ pathway, whereas PKD activation by Ang II (Gα q protein‐dependent and ‐independent) also involved other PKCs. Other findings comprised Gα q protein‐dependence of activation of transcription factors such as c‐JUN (transcription factor Jun), HOXA3 (homeobox protein HOX‐A3), and EP400 (E1A‐binding protein p400), phosphorylation of proteins promoting migration and phosphorylation of other membrane receptors such as the insulin receptor, the insulin‐like growth factor 2 receptor or the β2‐adrenergic receptor, whereas Gα q protein‐independent signaling included reduced transcriptional activity in the nucleus and phosphorylation of CXC chemokine receptor 4 or fibroblast growth factor receptor 3 (among others). Phosphorylation of proteins involved in receptor endocytosis, anti‐apoptosis, cytoskeletal rearrangement and cell cycle control were found for both signaling mechanisms, although the exact proteins in each pathway were not identical. 
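The comparative logic used in such biased-agonism experiments, attributing phosphosites that respond to SII Ang II to Gαq protein-independent signaling, can be sketched in a few lines of Python. The site identifiers, fold changes, and cutoff below are hypothetical and serve only to illustrate the classification principle, not to reproduce the published analyses.

```python
# Hypothetical log2 fold changes versus vehicle for a handful of phosphosites.
sites = {
    # site_id: (log2FC with Ang II, log2FC with SII Ang II)
    "PRKD1_S910": ( 1.8,  1.6),
    "JUN_S63":    ( 2.1,  0.1),
    "SSH1_S937":  (-1.2, -1.4),
    "MAPK3_T202": ( 1.5,  0.2),
}
CUTOFF = 1.0  # arbitrary |log2FC| threshold for calling a site "regulated"

def classify(fc_ang2: float, fc_sii: float) -> str:
    """Assign a phosphosite to a signaling class based on which agonist regulates it."""
    if abs(fc_sii) >= CUTOFF:
        # SII Ang II does not activate Gq, so regulation by it is attributed to
        # Gq-independent (e.g., beta-arrestin-mediated) signaling.
        return "Gq-independent"
    if abs(fc_ang2) >= CUTOFF:
        return "Gq-dependent (regulated by Ang II only)"
    return "not regulated"

for site_id, (fc_a, fc_s) in sites.items():
    print(f"{site_id:12s} {classify(fc_a, fc_s)}")
```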
In the year of publication of the study by the Hansen group (2010), the group of Robert Lefkowitz also applied phosphoproteomics for the study of AT 1 R signaling using the exact same cell type (AT 1 R-HEK-293; the Lefkowitz group provided these cells to the Hansen group), but with a focus on Gα q protein-independent/β-arrestin-dependent AT 1 R signaling by treating cells with SII Ang II only. The incubation time was 5 min and, therefore, similar to but not identical with the incubation times in the study by the Hansen group, which were 3 and 15 min. The dose of SII Ang II was slightly higher in the Lefkowitz study (30 μM) than in the Hansen study (18.7 μM). Using this approach, the authors identified 4552 phosphopeptides from 1555 phosphoproteins, of which 288 phosphopeptides met their rigorous definition of significance. In 222 phosphopeptides (from 171 phosphoproteins), phosphorylation levels were increased, and in 66 phosphopeptides (from 53 phosphoproteins), phosphorylation levels were decreased in response to the biased agonist SII Ang II. To verify their experimental approach, the authors successfully confirmed 5 of the identified phosphoproteins by Western blotting. They further noted a disproportionately high abundance of kinases among the phosphopeptides (38 protein kinases), such as ERK1, c-Src, Akt, mTOR (mammalian target of rapamycin), and CAMK2, which they could (partly) confirm by additional bioinformatic analysis (Motif-X, Kinase Enrichment Analysis [KEA]). In a further approach for analyzing the entire dataset, the authors applied a combination of bioinformatic tools including gene ontology (GO) analysis, Kyoto Encyclopedia of Genes and Genomes (KEGG) canonical pathway analysis, and Ingenuity Network Analysis and found an enrichment of terms related to actin cytoskeleton reorganization. Together with data from a previous study, which identified a β-arrestin interactome by a global proteomics approach, the authors outlined an AT 1 R-coupled, β-arrestin-dependent cytoskeletal reorganization subnetwork. A central role in this network was played by the slingshot phosphatase, which was found to be significantly dephosphorylated at Ser 937 and Ser 940 by SII Ang II treatment, which is an activation mechanism. Knockdown of β-arrestin 1 and 2 by siRNA prevented SII Ang II-induced slingshot activation, thus showing β-arrestin-dependence of the effect. In a series of further, elegant experiments, the authors showed that slingshot dephosphorylates cofilin at Ser 3 , which is a mechanism related to activation of actin reorganization and lamellipodia formation. This AT 1 R-induced effect seems to involve the formation of a β-arrestin-slingshot-cofilin complex that may additionally contain the phosphatase PP2A (protein phosphatase 2A), which is able to dephosphorylate and thus activate slingshot. Finally, the authors performed yet another series of bioinformatic analyses of their dataset, this time applying an inference algorithm and a literature-based kinome network combined with known β-arrestin-regulated proteins and the results from the kinase prediction part of their study to construct an interconnecting network of AT 1 R-β-arrestin-mediated signaling events. In this way, they found that major areas of AT 1 R-β-arrestin actions are the regulation of cell proliferation and cell cycle dynamics, cytoskeletal reorganization, adhesion and intercellular communication.
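As an illustration of the statistics underlying GO/KEGG-type over-representation analyses such as those mentioned above, the following sketch computes a hypergeometric enrichment p-value and fold enrichment for a single hypothetical term. The background and hit totals loosely echo the numbers reported for the Lefkowitz dataset, but the annotation counts are invented for illustration and are not taken from the study.

```python
# Minimal sketch of an over-representation (enrichment) test for one annotation term.
from scipy.stats import hypergeom

background     = 1555   # all phosphoproteins identified in the experiment
annotated      = 120    # background proteins carrying the term of interest (hypothetical)
hits           = 171    # significantly regulated phosphoproteins
hits_annotated = 30     # regulated proteins carrying the term (hypothetical)

# P(X >= hits_annotated) under the hypergeometric null of random draws from the background
p_value = hypergeom.sf(hits_annotated - 1, background, annotated, hits)
fold_enrichment = (hits_annotated / hits) / (annotated / background)

print(f"fold enrichment = {fold_enrichment:.2f}, p = {p_value:.2e}")
```

Dedicated tools such as Motif-X, KEA, or KEGG mappers add curated annotation sets and multiple-testing correction on top of this basic test, but the underlying statistic is typically of this form.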
Although the two studies by the Hansen and the Lefkowitz groups had very similar objectives and designs, their results are only partially congruent, with ≈30% identical hits. One reason may be the different methods for phosphopeptide enrichment in the two studies; another may be the stricter criteria for significance in the Lefkowitz study. However, the difference also reflects the fact that there is a risk of false-positive or false-negative hits in the (phospho)-proteomics datasets. Nevertheless, and importantly, the major functional areas, which were predicted to be modulated by AT 1 R-β-arrestin signaling in the Hansen and Lefkowitz studies, were largely identical. A third study by Louis Luttrell's group also investigated SII Ang II-induced AT 1 R-β-arrestin signaling and Ang II-induced global AT 1 R signaling by phosphoproteomics. Like the Hansen/Lefkowitz studies, the authors used AT 1 R-HEK-293 cells treated with SII Ang II (50 μM) or with Ang II (100 nM). The incubation time was 5 min. This study revealed far fewer phospho-modified proteins than the other two for methodological reasons—the use of two-dimensional gel electrophoresis (2DGE) and matrix-assisted laser desorption/ionization mass spectrometry (MALDI-MS) instead of liquid chromatography coupled to electrospray ionization mass spectrometry (LC-ESI-MS). The authors identified 36 phosphoproteins, of which 16 were only modified after SII Ang II, meaning they are part of the AT 1 R-β-arrestin axis. Two inhibitors of protein phosphatase 2A (I1PP2A/I2PP2A) and prostaglandin E synthase 3 (PGES3) were selected for further validation. Additional co-immunoprecipitation studies suggested the existence of I2PP2A/PP2A/Akt-β-arrestin and PGES3-β-arrestin complexes. Phosphorylation of I2PP2A within the β-arrestin/I2PP2A/PP2A/Akt complex led to inhibition of PP2A activity and subsequently to activation of Akt through reduced dephosphorylation of Thr 308 . Furthermore, the authors reported formation of a β-arrestin-PGES3 complex in response to SII Ang II, which was responsible for increased PGE 2 production. This effect could be abolished by knocking down β-arrestin. The study of the Luttrell group was of particular importance because some of the findings (SII Ang II-induced I2PP2A phosphorylation and PGE 2 synthesis) in the AT 1 R-HEK-293 cell line were confirmed in primary cells of the cardiovascular system, namely in vascular smooth muscle cells, whereas the other two studies were entirely performed in the artificial system of AT 1 R-overexpressing HEK-293 cells. None of the studies investigated any functional (cardiovascular) effects in ex vivo or in vivo experiments such as SII Ang II-induced vasorelaxation through PGE 2 or through Akt-mediated eNOS activation. However, increased PGE 2 production in response to the AT 1 R-β-arrestin-biased agonist Des-Asp 1 -Ang I was shown in human umbilical vein endothelial cells in a subsequent study by another group, thus pointing to induction of a vasorelaxant mechanism by AT 1 R-β-arrestin signaling. In addition to the above studies, which looked at the entire AT 1 R-coupled signaling network, a study by Gareri and co-authors took a more targeted approach and specifically looked at changes in phosphorylation of the C-terminal tail of the AT 1 R in response to biased (TRV023) and unbiased (Ang II) agonists. For this purpose, FLAG-tagged human AT 1 Rs were enriched from HEK-293 cell lysates using FLAG-tag affinity chromatography and, subsequently, phosphoproteomics was performed on the purified receptor.
Applying this unique approach, the authors indeed identified different phosphorylation patterns (so-called barcodes) of the AT 1 R C-terminal tail in response to the biased or unbiased agonist, respectively. A major finding of the study was that for full β-arrestin recruitment, phosphorylation of a certain cluster of serine and threonine residues in the proximal and middle portions of the tail was necessary. The authors concluded that binding of biased or unbiased agonists triggers different receptor conformations, thus inducing divergent phosphorylation patterns at the C-terminus of the receptor. Interestingly, a few years after the above-reviewed phosphoproteomics studies on AT 1 R-β-arrestin-biased signaling, the Lefkowitz group was able to show that biased or unbiased AT 1 R agonists stabilize the AT 1 R in distinct receptor conformations, which explains the different types of signaling mechanisms elicited by G-protein- or β-arrestin-coupled receptor activation. Figure illustrates the main findings on AT 1 R signaling obtained by phosphoproteomics. THE AT 2 RECEPTOR Like the AT 1 R, the AT 2 R is categorized as a class A G-protein-coupled receptor. However, signaling of the AT 2 R as determined by conventional methods and phosphoproteomics (the latter reviewed in detail in the following) is fundamentally different from classical GPCRs such as the AT 1 R, which has led some researchers to conclude that the AT 2 R may represent a distinct subclass of class A GPCRs. For example, the AT 2 R does not signal through G q and G 11/12 pathways, it does not recruit or signal through β-arrestin and it is not internalized. Instead, it signals through coupling to Gα i/o —which, however, does not lead to a decrease in cAMP formation as is usual for other GPCRs—or it signals through G-protein-independent mechanisms such as coupling to the AT 2 R-interacting protein (ATIP). Studies on AT 2 R signaling by low-throughput techniques consistently showed that upon agonist binding, the AT 2 R activates protein phosphatases such as SHP-1 [Src homology region 2 (SH-2) domain-containing phosphatase 1], PP2A and MKP-1 (MAPK phosphatase-1). These activated protein phosphatases interfere with other kinase-driven signaling pathways in an inhibitory way. For example, PP2A- and Gα i -dependent dephosphorylation of ERK-2 leads to inhibition of insulin-induced ERK1/2 signaling. AT 2 R signaling can also involve kinase activation, such as Akt, which is phosphorylated at the activating residue Ser 473 in response to AT 2 R stimulation. Akt promotes eNOS activation through phosphorylation of eNOS-Ser 1177 , which ultimately increases NO release by endothelial cells. In addition to eNOS-Ser 1177 phosphorylation, eNOS activation by the AT 2 R also involves dephosphorylation of eNOS by phosphatases. The above-reviewed signaling pathways—and others reviewed elsewhere—promote the classical effects of AT 2 R activation such as natriuresis, vasodilation, anti-inflammation, and antiproliferation, as illustrated in Figure . The first study deploying time-resolved, quantitative phosphoproteomics for the study of AT 2 R signaling used an untargeted approach for investigating early changes in the phosphorylation pattern of primary human aortic endothelial cells (HAEC) in response to short-term (up to 20 min) AT 2 R activation by the small molecule agonist compound 21 (C21).
Unexpectedly, the study revealed that, in contrast to the prevailing notion that AT 2 R signaling is mainly driven by phosphatase activation, the frequency of kinase-driven phosphorylation events was slightly higher. Kinase prediction identified the involvement of Akt in these phosphorylations, and also kinases that are known to activate phosphatases. In order to identify novel AT 2 R-coupled signaling pathways with this hypothesis-generating approach, proteins with modified phosphorylation levels were first analyzed by gene ontology (GO), a bioinformatic method for categorizing genes/proteins according to their molecular function, cell compartments or biological processes, followed by STRING analysis for identification of functional protein networks. These analyses unveiled an enrichment of terms related to cell proliferation and apoptosis. Within these terms, the authors selected HDAC1 (histone deacetylase-1), which was dephosphorylated at Ser 421/423 following C21 treatment (as subsequently confirmed by Western blotting) and which took a central position in the STRING-analysis cluster related to proliferation/apoptosis. The authors used this result derived from the untargeted approach to further explore, in a targeted approach, a potential novel AT 2 R-induced signaling pathway that is initiated by AT 2 R-induced Ser 421/423 -HDAC1 dephosphorylation. They could eventually show that AT 2 R-induced HDAC1 dephosphorylation attenuates its deacetylase activity, leading to lessened deacetylation of the tumor suppressor p53, which is an activation mechanism that leads to nuclear translocation of p53 and culminates in antiproliferative and anti-apoptotic effects of AT 2 R activation—functionally shown in this study in HAEC and in PC9, a non-small cell lung cancer cell line. In a second study with a similar protocol (up to 20 min AT 2 R stimulation in HAEC) but an improved MS methodology with higher sensitivity, the same authors used the newly identified endogenous AT 2 R agonist Ang-(1–5) for receptor activation. In this analysis, and in contrast to the study with C21 reviewed above, dephosphorylations slightly prevailed over phosphorylations. This difference may be due to the improved methodology in the second study, which allowed the detection of many more sites with changes in phosphorylation status than the first study—including tyrosine phosphorylations, which could not be detected by the methodology of the first study, but which play an important role in AT 2 R signaling, as was already detected by conventional methods years ago. Another reason for the slightly different result of the two studies in terms of the phosphorylation/dephosphorylation ratio may be that C21 and Ang-(1–5) act as biased agonists and do not elicit the exact same array of signaling cascades. Importantly, despite these differences in the phosphorylation pattern, both phosphoproteomic studies clearly point to tissue-protective, antiproliferative actions of the AT 2 R. In the study with Ang-(1–5) as AT 2 R agonist, this was evident from performing a KEGG pathway analysis of the data, which detects enrichment of phospho-modified proteins within defined signaling pathways, pointing to activation or inhibition of these pathways by the applied agonist.
In the case of AT 2 R activation by Ang-(1–5), KEGG pathway analysis revealed inhibition of VEGF (vascular endothelial growth factor) and HIF-1 (hypoxia-inducible factor-1) signaling, inhibition of leucocyte transendothelial migration as well as effects on the actin cytoskeleton and on adhesion. These results still await confirmation by a second method and by functional tests in future studies. THE Mas RECEPTOR Like the AT 2 R, the MasR, which is the main receptor for Ang-(1–7), is a class A GPCR with an unconventional signaling mechanism as defined by conventional methods. Interestingly, MasR and AT 2 R signaling mechanisms share many similarities. For example, as described for the AT 2 R in the preceding section, MasR-mediated vasodilation induced by Ang-(1–7) results from an increase in NO release. Studies using classical approaches have shown that Ang-(1–7)-induced NO release involves a rapid and long-lasting phosphorylation of eNOS at Ser 1177 after 5 to 30 min of treatment, resulting in eNOS activation and NO production, as shown in HAEC and MasR-transfected CHO cells. Western blotting further revealed that Akt, a kinase that phosphorylates eNOS at Ser 1177 , was phosphorylated at its activation site (Ser 473 ) following 5 min of Ang-(1–7) treatment via the PI3K-Akt pathway. The role of MasR in this process was confirmed using the selective MasR-antagonist A779, and by the absence of the effect in non-transfected CHO cells. Crosstalk has been described between Ang-(1–7)/MasR signaling and insulin/insulin receptor (IR) signaling. In brief, Ang-(1–7)/MasR increases the expression of insulin and induces beneficial outcomes in experimental models of insulin resistance and metabolic syndrome. Furthermore, Ang-(1–7)/MasR signaling and insulin/IR signaling share important effectors such as PI3K, Akt, GSK-3β (glycogen synthase kinase-3 beta), IRS-1 (insulin receptor substrate-1) and JAK2. Another important aspect of Ang-(1–7)/MasR signaling is the inhibition of pathways activated by Ang II/AT 1 R, explaining, at least in part, the counter-regulatory effects of Ang-(1–7) against Ang II effects (Figure ). It has been shown in different models that Ang-(1–7)/MasR induces the dephosphorylation and inhibition of key effectors of Ang II/AT 1 R signaling including ERK1/2, c-Src, p38 MAPK, JNK (jun N-terminal kinase), NF-κB, STAT3, Akt, PKC-α, GSK-3β, and NADPH (nicotinamide-adenine dinucleotide phosphate). The dephosphorylation of components of the MAPK/ERK pathway by Ang-(1–7)/MasR involves activation of the phosphatases SHP-2 and MKP-1. A study published in 2012 was the first, and thus far the only, publication applying phosphoproteomics to study Ang-(1–7)/MasR signaling. The study focused on early phosphorylation events in HAEC (up to 20 min after Ang-(1–7) stimulation). A total of 1288 unique phosphorylation sites on 699 proteins were identified. Of these, the phosphorylation levels of 121 sites on 79 proteins were reported to change significantly in response to the treatment, thus identifying potential components of Ang-(1–7)/MasR signaling pathways in HAEC. This study supports the potential interplay between Ang-(1–7)/MasR signaling and insulin/IR signaling, as eight of the identified phosphoproteins are also components of insulin/IR signaling: Akt, AKT1S1 (proline-rich AKT1 substrate 1), CAV1 (caveolin-1), FOXO-1 (forkhead box protein O1), MAPK1, PXN (paxillin), PIK3C2A (phosphatidylinositol 4-phosphate 3-kinase C2 domain-containing subunit alpha), and VIM (vimentin).
The shared phosphoproteins represent approximately 10% of all proteins identified as differentially phosphorylated/dephosphorylated in response to Ang-(1–7) treatment. In this study, FOXO-1 was selected for further confirmatory experiments. FOXO-1 is a transcription factor that undergoes Akt-induced phosphorylation at Thr 24 , Ser 256 , and Ser 319 . Phospho-FOXO-1 is localized in the cytoplasm and is transcriptionally inactive. However, upon its dephosphorylation, FOXO-1 is translocated into the nucleus and becomes transcriptionally active. Following 5 min of Ang-(1–7)/MasR stimulation, a significant dephosphorylation of FOXO-1-Ser 256 was revealed by phosphoproteomics. Functional validation by confocal microscopy confirmed that Ang-(1–7) led to nuclear accumulation of FOXO-1 in HAEC. The identification of FOXO-1 as an important downstream component of Ang-(1–7)/MasR signaling is an example of the potential of untargeted phosphoproteomics in generating new hypotheses. As mentioned before in this review, Ang-(1–7) induces the activation of PI3K-Akt signaling in HAEC (Figure ). Since PI3K-Akt signaling has been reported to lead to the phosphorylation of FOXO-1, resulting in its inactivation and cytoplasmic accumulation, the observed dephosphorylation and nuclear accumulation were against expectations and would probably not have been found with a targeted approach (Figure ). The finding of Ang-(1–7)/MasR-induced FOXO-1 activation in this study initiated a number of follow-up studies investigating the role of FOXO-1 for Ang-(1–7)/MasR signaling and actions by hypothesis-driven approaches. Another example of the use of MS-based technologies for studying Ang-(1–7)/MasR signaling is an interesting study by Hoffmann et al. in rat microvascular endothelial cells (RMVECs), which employed immunoprecipitation of the MasR under native conditions to co-precipitate its interacting proteins before and after stimulation with Ang-(1–7), followed by MS-based identification of the MasR-interacting proteins. A total of 50 proteins co-precipitated with the MasR, including AT 1 R, mTOR, PRKD1 (serine/threonine protein kinase D1), RASGRF1 (ras-specific guanine nucleotide-releasing factor 1), TRPM6 (transient receptor potential cation channel subfamily M member 6), and GRIP1 (glutamate receptor-interacting protein 1). In addition to identifying new interaction partners of the MasR, the study also confirmed heterodimerization of the MasR with the AT 1 R, which is one of several heterodimers described for RAS receptors. MasR/AT 1 R heteromerization negatively modulates Ang II/AT 1 R signaling, for example by inhibiting AT 1 R-induced inositol phosphate generation and intracellular Ca 2+ increase. THE MrgD RECEPTOR MrgD is a member of the Mas-related G-protein-coupled receptor family and of the protective axis of the RAS, with Alamandine as its primary ligand. β-alanine and GABA have been described as MrgD ligands too, though GABA is a low-affinity MrgD agonist. A structural study of MrgD complexed with β-alanine was recently published using cryo-electron microscopy (Cryo-EM). β-alanine binds to a shallow pocket close to the extracellular loop 2 (ECL2), surrounded by the transmembrane (TM) domains TM3, TM4, TM5, and TM6. The β-alanine/MrgD complex is stabilized by electrostatic interactions of the β-alanine carbonyl group (C=O) with Arg 103 (TM3) and Asp 179 (TM5). Hydrogen bonds stabilize interactions of β-alanine with Cys 164 (TM5) and Trp 241 (TM6).
It is possible that Alamandine binds to the same site as β-alanine because effects of Alamandine are abolished by a pre-treatment with β-alanine, suggesting that both ligands compete for the same site. However, it cannot be ruled out that Alamandine binds to a different site and that the observed β-alanine "antagonistic" effect is due to an allosteric conformational change rather than site competition, or that Alamandine binds to the same site but with different interaction partners within the receptor pocket. Thus, an investigation of MrgD complexed with Alamandine is still warranted. Like the other protective RAS receptors, the AT 2 R and MasR, MrgD mediates the induction of NO production. However, at least in cardiomyocytes, the signaling mechanism leading to Alamandine/MrgD-induced NO synthesis seems different and includes the activation of the LKB1 (serine–threonine liver kinase B1)/AMPK (AMP-activated protein kinase) pathway in a PI3K/Akt-independent fashion. The LKB1/AMPK pathway also seems crucial for the MrgD-mediated prevention of the hypertrophic effect induced by Ang II/AT 1 R in neonatal rat cardiomyocytes. This observation was confirmed in an in vivo transverse aortic constriction (TAC) model of cardiac hypertrophy in mice. TAC led to the dephosphorylation of AMPK-Thr 172 , but Alamandine via MrgD restored AMPK-Thr 172 phosphorylation, which is consistent with AMPK activation. Other signaling pathways and cellular events associated with the cardioprotective effect induced by Alamandine/MrgD in the TAC model, as identified by conventional methods, included the dephosphorylation and consequent inhibition of ERK1/2-Thr 202 /Tyr 204 , phosphorylation of PLN (cardiac phospholamban)-Thr 17 , and reduced expression of MMP-2 (matrix metallopeptidase 2). Regarding TAC-induced ROS production, Alamandine/MrgD decreased the expression of a subunit of NADPH oxidase (gp91phox) and increased the expression of SOD2 (superoxide dismutase 2, mitochondrial) and CAT (catalase). The MrgD-coupled signaling network induced by Alamandine was explored by untargeted phosphoproteomics complemented with antibody-based approaches in the context of a study that investigated a potential MrgD-dependent antiproliferative and anti-cancer effect in the human cancer cell lines Mia PaCa-2 and A549 and in MrgD-transfected CHO cells (MrgD-CHO). Phosphoproteomics of MrgD-CHO cells stimulated by Alamandine (up to 20 min) identified signaling pathways with potential tissue-protective outcomes similar to those found in the phosphoproteomics studies of the AT 2 R and MasR, comprising the inhibition of the pathways PI3K/Akt/mTOR and BRAF/MKK/ERK1/2, as well as the activation of FOXO-1 and p53. Of note, the phosphoproteomic experiments exploring the antitumoral effect of Alamandine in Mia PaCa-2 cells focused on later time points (up to 48 h) than all other RAS receptor phosphoproteomics studies. These incubation times were chosen because the antiproliferative effects elicited by Alamandine were only observed after 2 days of treatment. The authors reported that Alamandine induced a significant change in the phosphorylation of proteins associated with cytoskeleton regulation, potentially reducing the migratory capability of the cells. It was also reported that Alamandine/MrgD activation led to dephosphorylation and consequent inhibition of key proteins associated with cell division, such as EIF3B (eukaryotic translation initiation factor 3 subunit B) at Ser 85 /Ser 119 and EIF4B at Ser 422 /Ser 498 /Thr 500 /Ser 504 .
THE AT 4 RECEPTOR/IRAP Unlike the other RAS receptors AT 1 R, AT 2 R, MasR, and MrgD, which are seven-transmembrane (7TM) G-protein-coupled receptors (GPCRs), the AT 4 R/IRAP is a single-transmembrane (1TM) M1 zinc aminopeptidase. The receptor has a broad tissue distribution including expression in the brain, heart, kidneys, adrenal glands, and blood vessels. Ang IV binds to the IRAP catalytic site with high affinity, reducing its ability to degrade neuropeptides such as vasopressin, oxytocin, kallidin, and somatostatin, among others. Classical experiments have shown that Ang IV modulates different signaling pathways depending on cell type or tissue, some of which could be inhibited by AT 1 R or AT 2 R antagonists and thereby attributed to activation of these receptors. However, the important beneficial effects of Ang IV on cognition (and others) seem to be AT 1 R/AT 2 R-independent but AT 4 R/IRAP-dependent. Signaling of Ang IV through IRAP is still not entirely understood and may involve effects of the accumulated IRAP substrates or direct signaling effects of IRAP. To gain more insights into potential signaling pathways elicited by Ang IV/AT 4 R/IRAP, Wang et al. employed phosphoproteomics on N2A cells (a mouse neuroblastoma cell line) treated with Ang IV for 30 min or left untreated. In their publication, the authors focused the analysis of their data entirely on the dephosphorylation of the catalytic alpha subunit of protein phosphatase 1 (PP1α) at Thr 320 , which is an activation mechanism. In line with this, downstream substrates of PP1α were found to be dephosphorylated, suggesting an important role of this phosphatase in signaling in neuronal cells. Finally, the authors observed Ang IV-induced G1/S cell-cycle arrest, which they attributed to the increased activity of PP1α. COMMON RAS SIGNALING COMPONENTS Even though the number of phosphoproteome studies investigating RAS receptor signaling is still limited, it is, nevertheless, striking that the studies using untargeted approaches looking at receptors of the protective axis of the RAS identified largely similar signaling pathways, thus creating a kind of "déjà vu" experience. Analyzing four different phosphoproteome datasets from the AT 2 R, MasR, and MrgD, we observed a remarkable overlap of regulated phosphorylation events in response to short-term agonist stimulation. Figure illustrates some key signaling effectors shared by MasR, AT 2 R and MrgD according to the phosphoproteomics studies. For example, activation of all three receptors induced: FOXO-1 dephosphorylation and consequent activation, p53 dephosphorylation and consequent activation, HDAC dephosphorylation and consequent inhibition, and ERK1/2 dephosphorylation and consequent inhibition. Akt and AKT1S1 (proline-rich Akt1 substrate 1) were phospho-modified in the same way by the MasR and the AT 2 R (phosphorylation/activation of Akt; dephosphorylation/inhibition of AKT1S1), whereas MrgD activation induced opposing effects (dephosphorylation/inhibition of Akt; phosphorylation/activation of AKT1S1). AMPK phosphorylation/activation was only observed for MrgD and AT 2 R (but not MasR) signaling, whereas MAPK1 dephosphorylation/inhibition was only detected for AT 2 R and MasR signaling. Surprisingly, C21-induced AT 2 R activation led to ERK1/2 phosphorylation and consequent activation, while Ang-(1–5)-induced AT 2 R activation led to ERK1/2 dephosphorylation and consequent inhibition.
However, C21‐induced ERK1/2 activation happened very early (after 1 min), whereas Ang‐(1–5)‐induced ERK1/2 inhibition occurred only after 20 min, which may indicate that these events are not part of the same signaling pathway and biological process. ERK1/2 activation can mediate a multitude of different biological effects such as phosphatase activation (a potentially protective mechanism) at very early time points or promotion of pro‐inflammatory and pro‐fibrotic pathways at later time points. WHAT TO CONSIDER WHEN DOING PHOSPHOPROTEOMICS 10.1 Cell lines and animal models Untargeted phosphoproteomics relies on protein databases to identify (phospho)‐proteins in samples. There are two main types of protein databases: those containing unreviewed proteins (e.g., UniProtKB/TrEMBL) and those with reviewed proteins (e.g., UniProtKB/Swiss‐Prot). Unreviewed proteins are “computationally annotated”, while reviewed proteins are “manually annotated”, which is preferable since the results are more reliable. As of September 2024, the UniProtKB/Swiss‐Prot database included 26821 reviewed proteins from Homo sapiens (human), 17823 from Mus musculus (mouse), 8304 from Rattus norvegicus (rat), and 247 from Cricetulus griseus (Chinese hamster). Thus, the choice of cell lines and animal models can significantly impact (phospho)‐proteomics results, since the size of reference databases differs between species. Therefore, the choice of species is critical, and samples from humans or mice are generally preferred over other species for (phospho)‐proteomic studies. However, samples from less commonly used species can still be valuable under certain circumstances. For example, the CHO cell line originating from Chinese hamster ( C. griseus ) is often used for transfection and expression of RAS receptors (MasR, AT 1 R, AT 2 R, or MrgD) because it does not constitutively express these receptors, which means that non‐transfected cells can serve as perfect negative controls. Rat models such as spontaneous hypertensive rats (SHR) and transgenic rats are also widely employed in RAS research and often the optimal model for studying certain diseases. For species with a limited number of annotated proteins in a reviewed database, researchers may use the UniProtKB/TrEMBL database of unreviewed proteins. As of September 2024, it contained 83438 proteins for C. griseus and 100383 for R. norvegicus . However, the fact that these proteins are only computationally annotated needs to be kept in mind, conclusions done with more caution and where possible validated by additional experiments. For phosphoproteome studies, availability of data about the role of phosphorylation/dephosphorylation of certain residues (e.g., whether phosphorylation leads to activation or inactivation of a protein) is even more limited, though there are specific databases like the PhosphoSitePlus database ( https://www.phosphosite.org ) that can be used to interrogate specific phosphorylation sites. There are also algorithms that use experimental datasets to predict active kinases (e.g., KSTAR ) and active signaling pathways (e.g., phuEGO ). Nevertheless, interpretation of untargeted phosphoproteomic data can be difficult, and it may be necessary to limit follow‐up studies to only those identified phosphoproteins for which information is available in databases. 10.2 Selectivity of ligands Phosphoproteomics as reviewed in this article serves to unravel signaling mechanisms induced by the activation of a receptor by a respective agonist. 
Since phosphoproteomics is a highly sensitive technique, it is crucial to verify in advance whether the agonist to be used is highly selective for the targeted receptor. Since ligand selectivity is also a matter of dosing (every ligand loses selectivity at some point when increasing the dose/concentration), it is also essential to choose a dose/concentration for the agonist at which the agonist binds to and activates exclusively the target of interest. Data on selectivity of a certain ligand often only exist for a restricted number of potential off-targets—if at all. Therefore, there will always be some remaining uncertainty whether all observed effects can really be attributed to the interaction of the agonist with the target of interest. Thus, control experiments, for example with antagonists or in cells/animals, which do not express the receptor of interest, are essential to control for off-target effects.
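As a practical illustration of the database considerations discussed in this section—in particular, restricting follow-up work to phosphosites for which functional annotation is available—the sketch below filters a small, invented results table against an equally invented local annotation table. The data frames, column names, and annotation strings are assumptions made for illustration only (the annotations merely paraphrase examples discussed earlier in this review) and do not represent an actual database export.

```python
# Minimal sketch, assuming a hypothetical results table and a hypothetical local
# annotation export (e.g., compiled from PhosphoSitePlus); not a real dataset.
import pandas as pd

results = pd.DataFrame({
    "protein": ["FOXO1", "HDAC1", "LCP1", "UNCHAR1"],
    "site":    ["S256",  "S421",  "S5",   "S12"],
    "log2fc":  [-1.4,    -1.1,     1.8,    1.2],    # negative = dephosphorylation
    "qvalue":  [0.01,     0.03,    0.002,  0.04],
})

annotation = pd.DataFrame({
    "protein": ["FOXO1", "HDAC1", "LCP1"],
    "site":    ["S256",  "S421",  "S5"],
    "known_function": ["Akt site; cytoplasmic retention when phosphorylated",
                       "phosphorylation supports deacetylase activity",
                       "phosphorylation promotes actin assembly"],
})

regulated = results[(results["qvalue"] < 0.05) & (results["log2fc"].abs() > 1)]
candidates = regulated.merge(annotation, on=["protein", "site"], how="inner")

print(f"{len(regulated)} regulated sites, "
      f"{len(candidates)} with functional annotation available for follow-up")
```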
REMAINING KNOWLEDGE GAPS Although the above-reviewed MS-based phosphoproteome studies provided major insights into RAS-associated signaling mechanisms, some pieces of the puzzle are still missing for a global understanding of the RAS signaling networks. For AT 1 R signaling, for example, none of the phosphoproteomic studies used cells which endogenously express AT 1 R. However, AT 1 R signaling patterns have been thoroughly characterized by low-throughput techniques (reviewed elsewhere) using cells or tissues endogenously expressing the receptor, and most of the findings from phosphoproteomics in transfected cells are in concordance with findings from these low-throughput studies. Whether, and to what extent, additional signaling mechanisms identified in the phosphoproteomics studies using transfected cells, which are not "backed up" by conventional studies, are also relevant in models endogenously expressing AT 1 Rs remains to be investigated. To date, phosphoproteome-based studies of RAS signaling have primarily relied on simplified systems such as primary cells (e.g., HAEC) or transfected cell lines expressing specific receptors (e.g., CHO-MrgD, CHO-AT 2 R). While these models provide a controlled environment to dissect receptor-specific pathways and downstream effectors, they lack physiological complexity. Investigating RAS signaling in more complex systems, such as whole organisms or tissue-specific models, would provide critical insights into the biological relevance of these signaling pathways. Such studies could determine whether the effectors identified in vitro are similarly modulated in vivo, where the interplay of multiple cell types, tissue environments, and systemic factors could influence the signaling dynamics. Moreover, in vivo phosphoproteomics could reveal novel effectors and pathway regulations that are not evident in isolated cell models, advancing our understanding of RAS biology and its role in health and disease. Phosphoproteomics is a potential tool for comparing "shared" versus "unique" signaling patterns in different cell types/conditions.
For example, Schenk and coworkers reported substantial differences in Ang II-induced AT 1 R signaling between HEK and AB8/13 cells, both with exogenous AT 1 R expression. The same approach could also be used in cells/organisms with endogenous AT 1 R expression to unveil system bias (differences in signaling between different cells/tissues) or differences in signaling between normal and diseased conditions. Furthermore, the use of biased AT 1 R agonists in this setup would allow distinguishing between G-protein- and β-arrestin-dependent signaling patterns involved in physiological processes in different cells and/or in the progression of diseased states. What is still very much warranted is the characterization of the signaling pathways elicited either by G-protein- or by β-arrestin-biased ligands in systems endogenously expressing the AT 1 R. Such research has been hampered in the past by the unavailability of the respective biased AT 1 R agonists. The G-protein-biased AT 1 R agonist TRV055 only recently became available (first publication in 2019). β-arrestin-biased AT 1 R agonists have been available for longer, with the first, [Sar 1 ,Ile 4 ,Ile 8 ]Ang II (SII Ang II), published in 2003. Therefore, the initial approach to study G-protein-coupled versus β-arrestin-coupled AT 1 R signaling was a comparison of signaling cascades elicited by the balanced full agonist Ang II with those elicited by the β-arrestin-biased partial agonist SII Ang II. In this approach, the overlapping signaling components represent β-arrestin-dependent signaling pathways, whereas signaling components activated by Ang II only (but not by SII Ang II) constitute G-protein-dependent signaling pathways. As SII Ang II is a low-affinity, partial β-arrestin-biased AT 1 R agonist with some residual G-protein activation capability that becomes apparent particularly in AT 1 R-overexpressing cells, and since AT 1 R-overexpressing cells have been the standard model for studies on biased AT 1 R signaling so far, it is likely that existing data on AT 1 R β-arrestin-dependent signaling have some inaccuracies. Thus, a systematic phosphoproteomic investigation of cells with endogenous AT 1 R expression treated with the now available optimized biased AT 1 R agonists such as TRV055 (for G-protein-biased signaling) and TRV027 (for β-arrestin-biased signaling) would accurately characterize AT 1 R signaling through the two major receptor activation mechanisms. Another area which has hardly been investigated is the characterization of signaling pathways elicited by RAS receptor heterodimers. RAS receptors form heterodimers with other receptors of the RAS (e.g., AT 1 R-AT 2 R, AT 2 R-Mas) or with non-RAS receptors (e.g., AT 1 R-bradykinin B 2 receptor, AT 1 R-β-adrenergic receptors). This is important because heterodimerization can change receptor conformations and, thereby, receptor signaling. This has potential clinical relevance, for example due to the phenomenon of cross-inhibition, which means that one antagonist (e.g., an ARB) inhibits signaling of the other receptor in the dimer (e.g., a β1-adrenergic receptor). The AT 2 R, MasR and MrgD have been described to be constitutively active, i.e., they elicit intracellular signaling at a low level even without agonist binding. A further potential application of phosphoproteomics could be to determine whether constitutive signaling patterns differ from agonist-induced signaling.
Phosphoproteomic‐based studies on the signaling mechanisms elicited by several RAS components including (pro‐)renin/PRR, Ang‐(1–12), Ang‐(1–9), Ang A, and Ala‐(1–5) have not been performed yet. For some of these components, detailed knowledge of the signaling mechanisms may also help to identify the responsible receptor. Such studies may also clarify whether biased agonism only exists for the AT 1 R, that is, the classical arm of the RAS, or whether it can be found in receptors of the protective RAS as well. Finally, our understanding of RAS signaling could be significantly advanced through the integration of multi‐omics approaches. For instance, while phosphoproteomics offers critical insights into phosphorylation events and their roles in signaling cascades, examining other PTMs (e.g., glycosylation, methylation, and acetylation) could provide a more comprehensive view of the molecular mechanisms underlying RAS activity (e.g., regulation of gene expression and epigenetics). Furthermore, combining phosphoproteomics with metabolomics and lipidomics could reveal how RAS signaling pathways interact with cellular metabolism. From an in vivo perspective, recent advancements in single‐cell transcriptomics and single‐cell proteomics offer unprecedented opportunities to study RAS signaling at the resolution of individual cells. These techniques enable the characterization of cell‐type‐specific signaling dynamics and the identification of heterogeneous responses to RAS stimuli within complex tissues. CONCLUSIONS Phosphoproteomics is a powerful technique for quantifying phosphorylation events in an unbiased manner and has proven invaluable for studying signaling pathways across numerous receptor systems. However, its application in the context of RAS‐related signaling pathways remains surprisingly underexplored. There is significant potential to utilize phosphoproteomics for investigating the signaling cascades of emerging RAS components, such as Ang‐(1–5) and Ala‐(1–5), to study biased agonism within the RAS, and to explore how heterodimerization of RAS receptors impacts cellular signaling networks. With recent advancements enabling the identification of tens of thousands of phosphorylation sites per experiment, a comprehensive re‐examination of RAS receptor signaling is warranted, as new effectors and regulatory mechanisms are likely to emerge. Moreover, extensive datasets containing thousands of phosphorylated proteins modulated by RAS hormones are available in public repositories (e.g., PRIDE, Peptide Atlas, MassIVE, iProX) through the ProteomeXchange Consortium ( https://www.proteomexchange.org ). These datasets are often only partially analyzed in the original studies and, therefore, can be regarded as “goldmines” which offer opportunities for re‐analysis or meta‐analysis to identify signaling effectors which were previously overlooked or not explored in detail in the original studies. By revisiting these datasets with focused questions, researchers can extract valuable new insights from the data, broadening our understanding of RAS biology and potentially uncovering novel therapeutic targets. Igor Maciel Souza‐Silva: Conceptualization; writing – original draft; writing – review and editing. Victor Corasolla Carregari: Writing – original draft. U. Muscha Steckelings: Conceptualization; funding acquisition; writing – original draft; writing – review and editing; supervision. 
Thiago Verano‐Braga: Conceptualization; funding acquisition; writing – review and editing; writing – original draft; supervision. T.V.‐B. received funding from CNPq (406936/2023‐4; 309965/2022‐5), CAPES‐Finance Code 001 (88881.700905/2022‐01; 88887.916694/2023‐00), and FAPEMIG (BPD‐00133‐22). U.M.S. received funding from the Danish Council for Independent Research (4004‐00485B, 0134‐00297B) and the Novo Nordisk Foundation (6239, 0058592). The authors declare no conflict of interest.
Optimal dose of mivacurium for laser-assisted laryngeal microsurgery: a pharmacokinetic study using closed-loop target-controlled infusion
Participants A total of 67 patients diagnosed with vocal cord tumours scheduled for general anaesthesia-supported laser-assisted laryngeal microsurgery were recruited from the Department of Otolaryngology at the Fourth Hospital of Hebei Medical University between January 2015 and September 2015. This prospective randomized clinical trial adhered to the ethical principles of the World Medical Association Declaration of Helsinki. The local medical ethics review committee of the Fourth Hospital of Hebei Medical University approved the patients' selection for this study. This study was registered under the clinical trial registry of Hebei Medical University with registration number No. 2016031. All participants provided written informed consent. Inclusion and exclusion criteria The inclusion criteria for the study were as follows: patients aged between 18 and 65 years, of any gender, with American Society of Anesthesiologists (ASA) grade I–II and without neurological or muscular disorders. The exclusion criteria comprised patients with severe cardiopulmonary diseases, abnormal liver or kidney function, poorly controlled hypertension or diabetes, neuromuscular diseases, skin damage and a history of asthma and malignant arrhythmias. Individuals with previous surgery at the site of neuromuscular monitoring or who were administered medications affecting neuromuscular transmission within the past 3 months were excluded. Obese participants with body mass index (BMI) > 30 kg m –2 as well as pregnant or lactating women were also excluded. All patients who experienced adverse events such as a difficult airway, severe arrhythmias, hypotension, excessive bleeding, asthma, or severe allergic reactions during the procedure, which prevented the scheduled laser-assisted laryngeal microsurgery, were also excluded. Two patients were excluded for having a BMI greater than 30 kg m –2 , while 1 patient was excluded because the surgery was cancelled. In addition, 4 patients withdrew from the study due to incomplete intraoperative data collection. The final analysis was conducted based on the data collected from 60 patients ( ). Randomization In this study, a randomized controlled trial design was adopted. Patients were assigned numbers based on the order of their treatment, and random numbers were generated by the statistical software SPSS. Subsequently, the enrolled patients were allocated to 3 equal groups, namely C1, C2 and C3, through randomization. Sample size The sample size for the present experiment was estimated using the Power Analysis and Sample Size (PASS) 15.0 software (NCSS LLC., Kaysville, Utah, USA). The test level α was set at 0.05, and the test power 1–β was 0.8. Participants were categorized into three groups in this study. Using the recovery index as the primary observation metric, the mean and standard deviation for each group were estimated based on both existing literature and a preliminary experiment. According to the software calculations, the minimum sample size required was 19 cases per group, leading to a total of 57 subjects. In order to account for potential dropouts during the trial, the decision was made to have 20 cases in each group, resulting in a total of 60 cases. Anaesthesia methods Once patients entered the operating theatre, a catheter for intravenous administration was established in the right hand and 0.9% sodium chloride was infused before induction.
The patients' vital signs, including blood pressure, heart rate (HR), blood oxygen saturation (SpO 2 ), and bispectral index (BIS), were monitored. Oxygen was delivered with a flow rate of 4–6 L min –1 via a standard mask to provide oxygenation and facilitate nitrogen washout. Remifentanil (1.5 μg kg –1 ) and propofol (2.0 mg kg –1 ) were used to induce anaesthesia intravenously. Once the Modified Observer's Alertness/Sedation scale (MOAA/S) score of patients reached zero, a target-controlled infusion of mivacurium was administered at a dose of 0.2 mg kg –1 in all groups. This standardized approach was implemented in order to ensure uniformity in the initial conditions across all groups, and satisfactory intubation conditions were achieved at TOF 0. After achieving maximum suppression at the first twitch (T1), an enhanced endotracheal tube was inserted under direct laryngoscopic guidance to secure the airway. Following intubation, mechanical ventilation was initiated using an anaesthesia machine in volume-controlled mode. The volume-controlled ventilation settings included a fresh gas flow rate of 2 L min –1 , an FiO 2 of 60%, a tidal volume of 6–8 mL kg –1 and a respiratory rate of 12–14 breaths/min to maintain EtCO 2 levels in the range 35–45 mmHg. Anaesthesia maintenance was achieved through a continuous infusion of propofol at a concentration of 2.5–3.5 mg kg –1 and remifentanil at a rate of 0.2–0.3 μg kg –1 to maintain BIS values between 45 and 60. The infusion of these anaesthetic agents was halted at the end of the surgery. Mivacurium was administered using initial target-controlled infusions of 6 μg kg –1 min –1 in group C1, 7 μg kg –1 min –1 in group C2, and 8 μg kg –1 min –1 in group C3, with a dose escalation rate of 10 μg kg –1 min –1 when T1 recovered to 1%. When the TOF ratio was ≥ 0.9, the endotracheal tube was withdrawn and mivacurium was discontinued. If the patient's HR decreased to < 50 beats min –1 during the procedure, 0.5 mg of atropine was administered. When the mean arterial pressure (MAP) dropped below 65 mmHg, ephedrine was administered at a dose of 5–10 mg. In the event of significant adverse reactions such as pronounced erythema, bronchospasm, severe arrhythmias, or blood pressure fluctuations during the injection process, the trial was discontinued. A clinical criterion was used to determine the time of extubation by the anaesthesiologist. Neuromuscular monitoring and recording methods After the patient entered the operating room, a closed-loop target-controlled infusion system (Willy Ark Concert-CL; Guangxi VERYARK Technology Co., Ltd, China) was connected to the left upper forearm to monitor the neuromuscular conduction function and control the infusion rates. The TOF stimulation was applied at an interval of 20 s between each series of stimuli. T1 calibration was performed, and recordings of TOF stimulation were obtained while the patient was awake. The following data were recorded after drug administration: (1) onset time of mivacurium (time from completion of drug administration to maximum inhibition at the threshold), non-response time, recovery index (RI) (time for T4/T1 to recover from 25% to 75%), (2) supplementary doses of mivacurium, (3) vocal cord movements observed during the operation, and (4) adverse reactions such as skin rash, bronchospasm, and arrhythmia. Statistical analysis A statistical analysis was conducted using IBM SPSS 25.0 and GraphPad Prism 9.0. For normally distributed measurements, data were expressed as mean ± standard deviation (SD).
One-way analysis of variance (ANOVA) was employed to compare multiple groups. When the assumption of normal distribution was not met, the rank sum test was used, and the results were expressed as the median with the 25th and 75th percentiles: M (P25, P75). The t-test was utilized to compare two groups, with corrections made when the variances were non-homogeneous. Enumeration data were presented as percentages. The chi-square (χ 2 ) test was used for comparisons between multiple groups. Variables showing statistically significant differences in screening were entered as independent variables, with intraoperative drug supplementation as the dependent variable, for further binary logistic regression analysis; the receiver operating characteristic (ROC) method was employed to determine the optimal cut-off value when the independent variable was continuous. Pearson's correlation coefficient was used to examine the relationship between the total amount of medication and the recovery index, age and the intensity of the stimulation. Multiple linear regression models were employed to identify the factors that influence the recovery index. A significance level of P < 0.05 was considered statistically significant.
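For illustration, the sketch below reproduces the core group comparisons of this analysis plan in Python with synthetic data. Only the supplementation counts correspond to those reported in the Results below; the recovery-index values are simulated and are not the study data.

```python
# Sketch of the described statistical plan: one-way ANOVA across the three infusion
# groups and a chi-square test for the proportion of patients needing extra mivacurium.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Recovery index (min) per group: synthetic draws, not the measured values
ri_c1 = rng.normal(7.5, 2.5, 20)
ri_c2 = rng.normal(7.2, 2.5, 20)
ri_c3 = rng.normal(7.1, 2.5, 20)
f_stat, p_anova = stats.f_oneway(ri_c1, ri_c2, ri_c3)

# Patients requiring vs not requiring supplementary mivacurium (counts from the Results)
table = np.array([[15, 5],   # group C1
                  [10, 10],  # group C2
                  [3, 17]])  # group C3
chi2, p_chi2, dof, _ = stats.chi2_contingency(table)

print(f"ANOVA: F = {f_stat:.2f}, p = {p_anova:.3f}")
print(f"Chi-square: chi2 = {chi2:.2f}, df = {dof}, p = {p_chi2:.4f}")
```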
After the patient entered the operating room, a closed-loop target-controlled infusion Willy Ark Concert-CL (Guangxi VERYARK Technology Co., Ltd, China) was connected to the left upper forearm to monitor the neuromuscular conduction function and control the infusion rates. The TOF stimulation was applied at an interval of 20 s between each series of stimuli. T1 calibration was performed, and recordings of TOF stimulation were obtained while the patient was awake. The following data were recorded after drug administration: (1) onset time of mivacurium (time from completion of drug administration to maximum inhibition at the threshold), no-response time, and recovery index (RI) (time for T4/T1 to recover from 25% to 75%), (2) supplementary doses of mivacurium, (3) vocal cord movements observed during the operation, and (4) adverse reactions such as skin rash, bronchospasm, and arrhythmia. A statistical analysis was conducted using IBM SPSS 25.0 and GraphPad Prism 9.0. For normally distributed measurements, data were expressed as mean ± standard deviation (SD). One-way analysis of variance (ANOVA) was employed to compare multiple groups. When the assumption of normal distribution was not met, the rank sum test was used, and the results were expressed as the median with the 25th and 75th percentiles: M (P25, P75). The t-test was utilized to compare two groups, with corrections made when the variances were non-homogeneous. Enumeration data were presented as percentages. The chi-square (χ²) test was used for comparisons between multiple groups. Variables showing statistically significant differences on screening were entered as independent variables, with drug supplementation as the dependent variable, in a further binary logistic regression analysis; the receiver operating characteristic (ROC) method was employed to determine the optimal cut-off value when an independent variable was continuous. Pearson's correlation coefficient was used to examine the relationship between the total amount of medication and the recovery index, age and the intensity of the stimulation. Multiple linear regression models were employed to identify the factors that influence the recovery indexes. A significance level of P < 0.05 was considered statistically significant. There were no statistically significant differences in age, weight, height, BMI or comorbidities among the three groups ( ). During the intubation process, there was clear visualization of the vocal cords and minimal patient discomfort. All patients were successfully intubated, and no body movements were observed during the procedure. During the induction process, 3 patients experienced transient redness of the skin at the injection site (arm) with an incidence rate of 5%. There were no instances of airway spasms or significant fluctuations in blood pressure or heart rate. After the surgical procedure, the patients regained consciousness and demonstrated good respiratory functions. Neuromuscular monitoring The stimulus current intensity during T1 calibration was 33.86 ± 4.15 mA for male patients and 33.00 ± 5.20 mA for female patients. This difference was not statistically significant ( P > 0.05; ). Furthermore, no statistically significant correlation was found between age and the intensity of stimulation currents ( P = 0.266). Pharmacodynamic indicators of mivacurium During the anaesthesia induction process, the mean onset time of mivacurium target-controlled infusion at a dose of 0.2 mg kg –1 was 4.37 ± 0.83 minutes.
There was no statistically significant difference in the onset time of mivacurium among the three groups of patients. The mean no-response time was 3.94 ± 1.01 minutes and was not statistically significantly different among the three groups. Fifteen patients (75%) in group C1, 10 patients (50%) in group C2, and 3 patients (15%) in group C3 required the administration of additional mivacurium during surgery. The differences between group C3 and group C1, as well as between group C3 and group C2, were statistically significant ( P < 0.05; ). In group C1, when T1 reached 0, two patients (10%) experienced vocal cord movement during the surgery, whereas no patients in groups C2 and C3 exhibited this. However, there was no statistically significant difference between group C1 and group C2 ( P > 0.05; ). TOF75% and TOF90% were not significantly different between the three patient groups. The mean RI for the three groups of patients was 7.29 ± 2.54 minutes, and the difference among the three groups was not statistically significant ( P > 0.05; ). The analysis of independent variables using univariate and multivariate binary logistic regression showed that RI had no relationship with age, gender, BMI, onset of muscle relaxation, or no-response time. However, an RI greater than 5.68 was correlated with a reduced need for additional intraoperative mivacurium dosage ( P < 0.05; ). When RI was set as the dependent variable in multiple linear regression, it was not linearly correlated with age, sex, BMI, or muscle relaxation onset time, but it was linearly correlated with no-response time and the total dose of mivacurium ( F = 19.585, P < 0.001, R² = 0.689; ). Pearson's correlation analysis revealed a significant positive correlation ( r = 0.7838) between the RI and total mivacurium dosage in all patient groups ( P < 0.001, 95% CI: 0.6617–0.8655; ).
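The 5.68-minute RI threshold reported above comes from the ROC analysis described in the statistical methods. A minimal sketch of how such a cut-off can be derived with the Youden index is shown below; it assumes scikit-learn and uses synthetic data purely for illustration, not the study's measurements.

```python
import numpy as np
from sklearn.metrics import roc_curve

# Synthetic RI values (minutes): longer RIs for patients who did not need extra mivacurium.
rng = np.random.default_rng(0)
ri_minutes = np.concatenate([rng.normal(8.5, 2.0, 32), rng.normal(5.5, 1.5, 28)])
needed_supplement = np.concatenate([np.zeros(32, dtype=int), np.ones(28, dtype=int)])

# Treat "no supplementary dose required" as the state predicted by a higher RI.
fpr, tpr, thresholds = roc_curve(1 - needed_supplement, ri_minutes)
youden = tpr - fpr
best_cutoff = thresholds[np.argmax(youden)]
print(f"ROC-derived RI cut-off ~ {best_cutoff:.2f} min")
```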
In this study, there were no statistically significant differences in patient demographics or adverse reactions during anaesthesia induction. Only 5% of patients exhibited transient skin redness at the injection site during the induction process. Neuromuscular monitoring revealed that the intensity of stimulation remained consistent across genders and was not correlated with age. With higher mivacurium infusion dosage, a shorter muscle relaxation onset time, longer no-response time, quicker offset speed of neuromuscular block and slower complete recovery of neuromuscular function were observed. However, these factors were not statistically significantly different among the 3 groups. It is noteworthy that group C1 required a significantly larger amount of additional mivacurium during the surgery. In addition, 10% of patients in group C1 experienced vocal cord movement when T1 reached 0, although this difference was not statistically significant. Age, gender, BMI, and onset of muscle relaxation did not significantly influence the RI. An RI exceeding 5.68 minutes had a protective effect against the need for additional mivacurium administration to maintain muscle relaxation. Moreover, RI was significantly associated with no-response time and the total dose of mivacurium administration. In addition, higher mivacurium dosages were linked to faster muscle recovery. Mivacurium is characterized by its distinctive chemical structure, which combines the bisbenzyltetrahydroisoquinoline framework of atracurium with an ester linkage in succinylcholine. This composition makes mivacurium susceptible to enzymatic degradation, resulting in certain drawbacks that are associated with both of its structural components. These include a delayed onset of action and a tendency to induce allergic reactions . In the present study, we administered 0.2 mg kg –1 of mivacurium for anaesthesia induction, with an average onset time of 4.48 ± 1.03 minutes to achieve adequate muscle relaxation for endotracheal tube placement. As a result, all 60 patients achieved favourable intubation conditions and underwent successful tracheal intubation. A meta-analysis revealed that higher induction doses of mivacurium and prolonged time to intubation were linked to a greater probability of achieving optimal intubation conditions . However, while an induction dose of 0.25 to 0.3 mg kg –1 can reduce the onset time to 1–1.5 minutes, it carries the risk of hypotension due to histamine release . On the other hand, a delayed intubation time following mivacurium induction enables a more potent effect on muscle relaxation by increasing its accumulation at the neuromuscular junction, which intensifies its inhibitory effect .
In this study, only 3 patients experienced transient skin redness at the injection site during induction, which might be attributable to histamine release, local irritation, or allergic sensitivity. However, no cases of facial flushing, reflex tachycardia, or hypotension were reported. Histamine release is a significant adverse effect associated with mivacurium, potentially leading to unstable haemodynamics and thereby restricting its practical utility . Neuromuscular blocking agents are typically associated with type 1 hypersensitivity reactions, also known as immediate hypersensitivity. These reactions are characterized by the rapid release of proteases, including tryptase, preformed histamine and newly synthesized vasoactive mediators, from basophils and mast cells triggered by cross-linking of muscle relaxants with membrane-bound IgE . The release of these mediators can lead to skin reactions, bronchospasm and cardiovascular symptoms . It was previously reported that mivacurium induced histamine release when administered at high doses of 0.20 mg kg −1 or more, equivalent to more than three times the ED95, and with rapid administration in less than 30 seconds . Therefore, a target-controlled infusion of mivacurium, characterized by a controlled and gradual delivery, might mitigate histamine release and minimize the risk of allergic reactions compared to rapid bolus injections, which involve the administration of larger doses at a faster rate. Mivacurium has a short half-life and rapid clearance due to a rapid enzymatic hydrolysis by butyrylcholinesterase, leading to a relatively short duration of neuromuscular blockade . Since the duration of laser laryngeal microsurgery for vocal tumours typically ranges from 30 to 60 minutes, the rapid clearance of mivacurium provides the basis for the continuous infusion of mivacurium during the procedure. It was previously reported that the mean consumption of mivacurium for continuous infusion in adults to sustain neuromuscular blockade at 89–99% twitch suppression typically falls in the range of 6–7 μg kg –1 min –1 without employing a control system . Janda et al. observed that a mivacurium dosage of 4.25 ± 1.25 μg kg –1 min –1 was sufficient to sustain a 90% neuromuscular blockade . Conversely, Kansanaho et al. demonstrated that an average mivacurium consumption of 7.5 ± 3.1 μg kg –1 min –1 was associated with 95% neuromuscular blockade, while Schumacher et al. reported that a consumption rate of 7.0 ± 2.2 μg kg –1 min –1 was linked to a neuromuscular blockade of 90% , which might potentially be influenced by variations in patients and surgical procedures. In order to ensure deep muscle relaxation during laser-assisted laryngeal microsurgery, the maintenance dose of mivacurium in this study was set to maintain a 100% neuromuscular blockade with T1 at 1% for supplementary dosage, thus effectively preventing laryngeal reflex and autonomous vocal cord movement. When mivacurium was administered as a continuous infusion at a rate of 6 μg kg –1 min –1 , 2 patients experienced involuntary vocal cord movement during the procedure and 15 patients required additional mivacurium doses to maintain muscle relaxation, indicating insufficient neuromuscular blockade at this low dose, which might potentially impact the accuracy of laser surgery. In group C2, which received 7 μg kg –1 min –1 , 10 patients needed additional doses.
In contrast, patients in group C3, who received 8 μg kg –1 min –1 , displayed no autonomous vocal cord movements during the surgery, required fewer supplementary doses, and had no significant blood pressure or heart rate fluctuations, ultimately achieving optimal surgical conditions. In line with our results, Yongjie et al. applied a sequential method to establish the maximum safe dose of mivacurium for continuous infusion in thyroid surgery, which was found to be 8.94 μg kg –1 min –1 , with a 95% confidence interval ranging from 8.89 to 8.99 μg kg –1 min –1 . These findings suggest that precise mivacurium dosing is vital for achieving optimal conditions in laser surgery, with a dosage of 8 μg kg –1 min –1 achieving the most favourable outcomes. In the present study, individual stimulation current calibration was performed in order to ensure precise neuromuscular monitoring. A prior study demonstrated that maintaining the stimulation current at least 15 mA above the threshold resulted in variances of up to 10% in individual TOF ratios , indicating the importance of calibrating stimulation currents. In addition, it was found that the postoperative effects of neuromuscular blocking drugs affected male patients more significantly than female patients, further emphasizing the need for tailored calibration . Nonetheless, it remains unclear whether males have a higher incidence of complications related to neuromuscular blocking drugs compared to females . Taking individual differences into account, we conducted T1 calibration before anaesthesia induction for all participants. Our findings revealed no significant differences in the stimulation current values during T1 calibration between male and female patients. Furthermore, we did not observe a relationship between the RI and gender. When the RI exceeded 5.68 minutes, additional intraoperative doses of mivacurium were not warranted. A longer RI, indicating a slower recovery of neuromuscular function, suggests that mivacurium remains effective for an extended duration, reducing the need for supplementary doses during the surgery to maintain the desired level of muscle relaxation. It was demonstrated that the prolonged infusion of mivacurium did not lead to significant accumulative effects, and the recovery time was independent of the total administered dose . In contrast, our findings revealed a positive correlation between the RI and the total administered dose, signifying that the recovery time increased with higher doses of mivacurium. Various factors, such as genetic mutations in the butyrylcholinesterase gene, haemodilution, impaired hepatic function, albumin loss, malnutrition, pregnancy and oestrogenic effects, could prolong the effects of mivacurium by a few minutes to several hours [ – ]. Additionally, a previous investigation found that the cis–trans and trans–trans isomers were rapidly eliminated from the body within approximately 2 minutes, while the cis–cis isomer had a longer elimination time of about 52 minutes . An accumulation of the cis–cis isomer in the plasma could contribute to a prolonged recovery period and increase the likelihood of residual neuromuscular blockade after the surgery . There are several limitations in this study that should be acknowledged.
While the objective of our study was to determine the optimal mivacurium infusion dosage for laryngomicroscopic surgery using a target-controlled infusion system, we did not conduct a comprehensive comparative analysis of mivacurium’s effects and recovery compared to other neuromuscular blocking agents, such as rocuronium and atracurium. In addition, 3 patients among those who received a continuous mivacurium infusion at 8 μg kg –1 min –1 required supplemental mivacurium doses to maintain deep muscle relaxation, emphasizing the importance of individualized monitoring and dosing strategies. Furthermore, this study was conducted at a single centre and involved a relatively small sample size. Further investigations with larger patient populations across many centres are needed in order to further explore the pharmacodynamics of mivacurium and corroborate the findings of the current paper. This study investigated the effects of different doses of mivacurium in patients undergoing laser-assisted laryngeal microsurgery for vocal cord tumours. A target-controlled infusion of mivacurium at a dose of 0.20 mg kg –1 achieved optimal intubation conditions during anaesthesia induction while minimizing allergic reactions. Such allergic reactions, particularly histamine release, can pose significant challenges in maintaining stable haemodynamics during induction. Therefore, the controlled and gradual administration of mivacurium emerges as a safer and more efficacious approach. Moreover, the continuous infusion of mivacurium at a rate of 8 μg kg –1 min –1 stood out as the most suitable maintenance strategy, ensuring a favourable surgical environment by preventing involuntary vocal cord movement, bronchospasm, or significant haemodynamic fluctuations, and minimizing the need for supplementary doses. This dosage regimen has the potential to enhance patient safety and ensure smoother surgical procedures, particularly in delicate settings such as laryngeal microsurgery. In addition, this study provided insights into factors influencing the RI, including no-response time and the total dose of mivacurium, and revealed a positive correlation between the RI and the total administered dose of mivacurium. An increase in the total drug dosage during continuous mivacurium infusion might lead to delayed recovery of muscle relaxation and residual neuromuscular blockade and thus necessitates careful management. These findings might enable anaesthesiologists to make informed decisions regarding mivacurium dosage and better manage patients undergoing laryngeal microsurgery. Overall, the results underscore the significance of tailored approaches to neuromuscular blockade management and contribute to our understanding of mivacurium’s pharmacodynamics, helping to enhance patient safety and surgical outcomes. Further research and clinical applications of these findings hold the potential to improve the quality of care in anaesthesia and laryngeal microsurgery.
Joint analysis of time-varying effect of vaccine and antiviral drug for preventing severe complications and mortality
0489a985-fefe-458e-afd7-7370f0bf2cf8
11830013
Vaccination[mh]
The COVID-19 pandemic, caused by the SARS-CoV-2 virus, has posed unprecedented challenges to healthcare systems globally. As of February 20th, 2024, there have been over 774 million reported COVID-19 cases and an estimated 7 million recorded deaths worldwide. However, the actual numbers of infections and fatalities are likely to be much higher than the official figures – . In Hong Kong, the pandemic has been characterized by six distinct waves of infection. Hong Kong experienced relatively low SARS-CoV-2 circulation until a major community outbreak of the Omicron (B.1.1.529) sublineage BA.2.2 began in January 2022, triggering the fifth wave. Subsequently, the more transmissible BA.4/5 variant emerged in June 2022 and became the predominant strain by August 2022, resulting in the sixth wave of the pandemic – . From December 31, 2021, to January 1, 2023, the fifth and sixth waves resulted in a reported 2,863,475 confirmed COVID-19 cases. This evolving public health challenge has spurred the rapid development of vaccines and the repurposing of existing medications to treat COVID-19 infections. Oral antiviral agents have emerged as crucial tools in reducing the incidence of severe complications, slowing down disease progression, and decreasing mortality rates in hospitalized patients . Oral antivirals (OAVs), such as molnupiravir (Lagevrio) and nirmatrelvir plus ritonavir (Paxlovid), have gained emergency use authorization from the US Food and Drug Administration for treating non-hospitalized, mild-to-moderate COVID-19 adult patients at risk of progressing to severe disease. These OAVs have been extensively studied in randomized controlled trials and real-world studies, consistently demonstrating their effectiveness in reducing hospitalizations and mortality rates. Their approval provides a novel and promising treatment option for adult patients with COVID-19 – . Nirmatrelvir-ritonavir was delivered to Hong Kong on March 14, 2022, while molnupiravir arrived in Hong Kong a few days earlier. The Hospital Authority began distributing these antiviral drugs to hospitals on the day after their arrival – . Doctors assess and make clinical judgments based on each patient's condition before prescribing the appropriate oral antiviral drugs, aiming to reduce the risk of deterioration and minimize the likelihood of adverse reactions following treatment. Apart from oral antivirals, vaccination plays a crucial role in combating the ongoing global pandemic. In Hong Kong, the COVID-19 Vaccination Programme was launched on February 23, 2021. Two types of vaccines were used in Hong Kong, namely, Comirnaty (BNT162b2), an mRNA-based vaccine, and CoronaVac (Sinovac), an inactivated virus vaccine. The Comirnaty vaccine has been found to lower the hospitalization rate and reduce the frequency of severe outcomes due to the COVID-19 virus – . Both vaccines have been shown to be effective in reducing the chance of developing severe disease , . Although vaccines and antiviral drugs have been used to reduce the numbers of infections, complications and deaths, the virus has exhibited rapid evolution into new variants and subvariants. Decline in vaccine protection has been observed in various scenarios, especially against infection . Observational evidence suggests that vaccines offer robust and long-lasting protection against severe complications and death – .
Some studies have focused on examining the combined impact of vaccines and antiviral drugs, but they have not taken into account the time-varying effect of these interventions – . Therefore, the objective of this study was to investigate the time-varying effect of vaccines and antiviral drugs in preventing severe complications and death. Study design and study population In this territory-wide study, we utilized electronic medical records from the Clinical Management System of Hong Kong Hospital Authority, vaccination records (including vaccination types and dates) from the Hong Kong Department of Health, and COVID-19 confirmed case records from the Hong Kong Centre for Health Protection. The datasets were match-merged using unique pseudo identifiers. The electronic medical records contained important variables such as age, gender, history of hospital admission, chronic diseases, and other relevant information. By combining and analyzing these datasets, we aimed to gain comprehensive insights into the relationship between vaccination status, medical history, and COVID-19 outcomes in Hong Kong. The study was conducted in accordance with the Declaration of Helsinki. The need to obtain informed consent for this retrospective study was waived by the Institutional Review Board of the University of Hong Kong/Hospital Authority Hong Kong West Cluster (HKU/HA HKW IRB) (Reference No.: UW 20–341 and date of approval: 17/2/2022). Outcomes and follow-up period The main objective of this study was to examine the time-varying protective efficacy (PE) of vaccines and antiviral drugs on complications or death among hospitalized COVID-19 patients. Complications were defined as the progression of the disease into a serious, critical, or fatal case. This definition is based on a combination of factors, including mortality, the need for oxygen supplementation at a rate of ≥ 3 L per minute, admission to the intensive care unit, intubation, the requirement for extracorporeal membrane oxygenation, or experiencing shock . Our study population comprised patients who were admitted to the hospital between March 15, 2022, and November 30, 2022, and had a confirmed diagnosis of SARS-CoV-2 infection. This timeframe aligns with the introduction of antiviral drugs in Hong Kong , . Notably, we did not take into account the administration of antivirals prior to hospitalization. It is important to note that we assumed individuals would not progress to severe cases or death within two weeks of their last vaccination. The follow-up period for each hospitalized patient in the study began on the date of confirmed diagnosis or two weeks after receiving vaccination, and it continued until the occurrence of the primary outcome of interest, discharge from hospital or the end of the observation period. Explanatory variables The risk set was organized and analyzed based on calendar days, which helped to address the concern of immortal time bias. We also took into account the changes over time in key variables, such as the status of oral antiviral treatment and the duration since the last vaccination. Specifically, if an oral antiviral was prescribed, we further distinguished whether the prescription was given within 5 days of confirmed diagnosis or not.
Regarding vaccination, we included one-dose, two-dose, three-dose, and four-dose regimens as exposures of interest. We also examined different vaccine types, specifically Comirnaty and CoronaVac. Alongside these variables, we included other predictors in our analysis, namely age, gender and Charlson Comorbidity Index (CCI). Statistical analysis We adopt the Andersen-Gill model, which is an extension of the Cox model for analysis of recurrent events – . Let day 1 be March 15, 2022, and so on. The hazard function of the outcome event for subject $i$ on day $t$ then takes the form $\lambda_i(t) = \lambda_0(t)\exp[\boldsymbol{\alpha}^{T}\boldsymbol{x}_i(t) - z_i(t)V_i(t) + w_i(t)O_i(t)]$ for $t = 1, 2, \dots, 260$, where $\lambda_0(t) \ge 0$ is an unspecified baseline hazard function, $\boldsymbol{x}_i(t)$ is a vector of covariates that includes age, gender and CCI, with the corresponding vector of regression coefficients $\boldsymbol{\alpha}$; $z_i(t)$ is the vaccination status, with $z_i(t) = 0$ if the subject had not received any vaccination before day $t$ and $z_i(t) = 1$ otherwise; $V_i(t)$ is the function of the time-varying vaccination effect, which is expected to wane over time; $w_i(t)$ is the oral antiviral prescription status, with $w_i(t) = 0$ if the subject had not been prescribed an antiviral drug on or before day $t$ and $w_i(t) = 1$ otherwise; and $O_i(t)$ characterizes the time-varying effect of the OAV, which is further elaborated below. Let $d$ be the day of receiving the last vaccination. The time-varying vaccination effect $V(t)$ is estimated separately for each number of doses received by day $t$ using a modified exponential decay function given by $V(t) = A e^{-B(t - d - 14)^{C}}$, for $A, B, C > 0$, where the term $(t - d - 14)$ is based on the assumption that it takes 14 days for the protective effects of the vaccination to reach their peak , . This function has the advantage that the three parameters $A$, $B$ and $C$ are interpretable, representing the dose effect immediately after each vaccination, the rate of waning and the shape of the trajectory, respectively. Since we do not have the exact or approximate time required for the antiviral drugs to take effect, we use a 4-parameter pharmacokinetic/pharmacodynamic (PK/PD) function to estimate the trajectory of the efficacy. Not only does this approach provide biomedical interpretation, but it also enables a non-monotonic, non-linear estimation of the time-varying effects and the estimation of the peak level of PE . Let $p$ be the day of receiving the antiviral drug treatment. The time-varying antiviral drug effect on day $t$ using a PK/PD function can be expressed as $O(t) = \gamma\ln(\beta) - \ln\{\beta^{\gamma} + [\frac{\kappa}{\kappa-1}(e^{-(t-p)} - e^{-\kappa(t-p)})]^{\gamma}\} + \delta(1 - e^{-\kappa(t-p)})$, for $\kappa, \gamma, \beta > 0$ and $\kappa \ne 1$. A large $\beta$ value indicates a lower level of protection, a large $\kappa$ value indicates a quicker rise of $\frac{\kappa}{\kappa-1}(e^{-(t-p)} - e^{-\kappa(t-p)})$ and therefore of $O(t)$, and a large $\gamma$ value indicates a more sigmoid shape of the trajectory.
A large $\delta$ value with a negative sign indicates a robust, long-lasting protective effect over time, while a large positive $\delta$ value indicates a strong rebound effect. The PE of the vaccine can be expressed as $PE_V(t) = 1 - \exp[-V(t)]$, and that of the oral antiviral drug as $PE_O(t) = 1 - \exp[O(t)]$. The estimates for the parameters are obtained by maximizing the Breslow-type partial likelihood function.
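To make the two effect functions concrete, the short numerical sketch below evaluates V(t), O(t) and the corresponding PE curves for a few days after vaccination or prescription. The parameter values are arbitrary placeholders chosen only to show the shapes; in the study they are obtained by maximizing the partial likelihood, not by this code.

```python
import numpy as np

def vaccine_effect(t, d, A, B, C):
    """Modified exponential decay V(t) = A * exp(-B * (t - d - 14)^C), evaluated for t >= d + 14."""
    s = np.maximum(t - d - 14.0, 0.0)
    return A * np.exp(-B * s**C)

def antiviral_effect(t, p, beta, kappa, gamma, delta):
    """Four-parameter PK/PD form of O(t) for days t after the prescription day p (kappa != 1)."""
    s = t - p
    rise = (kappa / (kappa - 1.0)) * (np.exp(-s) - np.exp(-kappa * s))
    return (gamma * np.log(beta)
            - np.log(beta**gamma + rise**gamma)
            + delta * (1.0 - np.exp(-kappa * s)))

days = np.arange(1.0, 29.0)
# PE_V = 1 - exp(-V) and PE_O = 1 - exp(O), following the definitions above.
pe_vaccine = 1.0 - np.exp(-vaccine_effect(days + 14.0, d=0.0, A=1.2, B=0.02, C=1.0))
pe_antiviral = 1.0 - np.exp(antiviral_effect(days, p=0.0, beta=0.5, kappa=2.0, gamma=1.5, delta=-0.2))
print(pe_vaccine[:5].round(3), pe_antiviral[:5].round(3))
```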
We identified a total of 48,984 hospitalized patients with confirmed diagnoses of SARS-CoV-2 infections between March 15, 2022, and November 30, 2022. Among these patients, 38,290 individuals met the inclusion criteria, of which 9,512 and 10,851 received molnupiravir and nirmatrelvir-ritonavir, respectively (Fig. ). Among the 20,363 patients who received oral antiviral treatments, 8,608 were 80 years or above, 8,972 between 60 and 79, and 2,783 below 60 and above 18 (Supplementary Table 1). In this study, 9,661 patients had not taken any vaccination. A total of 549, 2,712, 5,044, and 407 patients received one, two, three, and four doses of Comirnaty, respectively, and 3,358, 6,531, 9,033, and 968 patients received the corresponding numbers of doses of CoronaVac. The daily numbers of severe complication cases and death cases are shown in Supplementary Fig. 1. We categorize the patients into three age groups, namely 18–59, 60–79, and 80 or above, with 18–59 set as the reference group. The Charlson Comorbidity Index (CCI) is categorized into 4 groups, namely 0, 1–4, 5–6, and 7–14, with 0 being the reference group. The estimation results for these time-constant variables are shown in Table . The time-varying effects of vaccines and oral antiviral drugs are plotted in Fig. and Fig. . Patients aged between 60 and 79 and those aged 80 or above had a relatively higher risk than adult patients aged below 60, with hazard ratios of 2.292 (95% CI 1.954 to 2.689, p < 0.001) and 3.644 (95% CI 3.121 to 4.255, p < 0.001) against severe complications, respectively. The corresponding hazard ratios against death are 2.659 (95% CI 2.187 to 3.233, p < 0.001) and 4.941 (95% CI 4.087 to 5.973, p < 0.001). In both severe complications and death, male patients have a significantly higher risk compared to female patients, with respective hazard ratios 1.220 (95% CI 1.138 to 1.308, p < 0.001) and 1.192 (95% CI 1.108 to 1.283, p < 0.001). The CCI is the most widely used scoring system for comorbidities. Compared to patients with a CCI of zero, the hazard ratios against severe complications are 1.517 (95% CI 1.410 to 1.633, p < 0.001), 2.397 (95% CI 1.910 to 3.007, p < 0.001), and 2.398 (95% CI 2.028 to 2.836, p < 0.001) for CCI values of 1–4, 5–6, and 7–14, respectively. Compared to patients with a CCI of zero, the hazard ratios against death are 1.588 (95% CI 1.469 to 1.717, p < 0.001), 2.778 (95% CI 2.214 to 3.486, p < 0.001), and 2.843 (95% CI 2.404 to 3.362, p < 0.001) for CCI values of 1–4, 5–6, and 7–14, respectively.
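As a quick consistency check of how these hazard ratios relate to the underlying Cox/Andersen-Gill coefficients, the point estimate and confidence limits can be converted back and forth on the log scale. The snippet below uses the reported figure for the 60–79 age group purely as a worked example; the standard error is inferred from the published interval, not taken from the original model output.

```python
import math

# Reported in the text: HR 2.292 (95% CI 1.954 to 2.689) for ages 60-79 vs 18-59.
hr, lo, hi = 2.292, 1.954, 2.689

coef = math.log(hr)                                # Cox coefficient alpha = ln(HR)
se = (math.log(hi) - math.log(lo)) / (2 * 1.96)    # SE backed out of the 95% CI

# Reconstruct the interval from the coefficient and SE to confirm consistency.
ci_lo = math.exp(coef - 1.96 * se)
ci_hi = math.exp(coef + 1.96 * se)
print(f"alpha = {coef:.3f}, SE = {se:.3f}, 95% CI = ({ci_lo:.3f}, {ci_hi:.3f})")
```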
Although the time-constant model can provide good estimation of the vaccination effects, a generalized likelihood ratio test indicates that the time-varying model offers a significantly better fit (with p < 0.001) to the data compared to the time-constant model, regardless of whether we are studying complications or mortality. From the plots of the estimated vaccine PE trajectory in Fig. , it is clear that receiving more doses of vaccination was associated with a reduced risk of complications and death overall. Moreover, the PEs are rather stable over time, suggesting that the vaccines offer robust and long-lasting protection against severe complications and death. Waning effects of the two types of vaccines become apparent only in the fourth dose. To compare the two sets of estimates, we fitted the model again assuming no waning of the vaccine effect, i.e. that the PE is constant over time. The results are summarized in Table , where we report the peak PE level at $t = 0.5$ month and the PE level at 6 months for each vaccine type and number of doses. The two sets of PEs are very similar in the cases where the number of doses is 1, 2, or 3, which is expected because there was little waning. For the case with 4 doses, the estimated time-constant PE can be seen as the average of the PEs over time. Therefore, both time-varying and time-constant vaccine effect approaches are reasonable in this application when evaluating the protective efficacy of vaccines against complications or death, but the time-varying approach would be more informative in guiding informed decisions. The estimated trajectory presented in the left panel of Fig. reveals the time-varying effects of antiviral drugs taken within 5 days of confirmed COVID-19 diagnosis. For this group of patients, the protective efficacy against complications reaches a maximum of 0.438 and 0.634, respectively, for molnupiravir and nirmatrelvir-ritonavir at around 5 days since prescription of the antiviral drugs. However, the efficacy then decreases to 0.047 for molnupiravir and 0.489 for nirmatrelvir-ritonavir after 28 days. Outside the 5-day window, the time-varying protective efficacy remains stable at 0 for both drugs. Regarding the time-varying effects against death, for patients who took antiviral drugs within 5 days of confirmed COVID-19 diagnosis, the PE for molnupiravir reaches a maximum of 0.657 after 1 day and then drops to −0.103 after 28 days. For nirmatrelvir-ritonavir, the maximum protection is achieved at day 3 since prescription with a maximum PE of 0.778, followed by a decline to 0.313 after 28 days. Outside the 5-day window, the PE remains at 0 for molnupiravir, while for patients who took nirmatrelvir-ritonavir, the PE increases to 0.140 one day after prescription and then decreases to −0.789 after 28 days. When assuming a time-constant effect for oral antivirals, the PE against complications is 0.540 (95% CI 0.482 to 0.591, p < 0.001) and −0.032 (95% CI −0.718 to 0.380, p = 0.904) for nirmatrelvir-ritonavir, within and outside 5 days respectively, and 0.249 (95% CI 0.186 to 0.308, p < 0.001) and −0.014 (95% CI −0.346 to 0.236, p = 0.924) for molnupiravir, within and outside 5 days respectively.
For the PE against death, nirmatrelvir-ritonavir provides 0.622 (95% CI 0.566 to 0.670, p < 0.001) and −0.508 (95% CI −1.300 to 0.012, p = 0.057) within and outside 5 days, respectively, and molnupiravir provides 0.313 (95% CI 0.251 to 0.369, p < 0.001) and 0.038 (95% CI −0.277 to 0.276, p = 0.787) within and outside 5 days, respectively. We summarize the results based on the time-varying and time-constant models in Table . We report the maximum PE level, the time to reach the peak, and the PE level at $$t = 28$$ days based on the time-varying model. The time-constant PE estimate can be regarded as the average of the time-varying PE over time. These findings emphasize the dynamic nature of the protective effects of the oral antiviral drugs over time. The observed variations in PE within 28 days after prescription underscore the importance of considering the temporal aspects of pharmacodynamic relationships when evaluating the potential impact of these interventions on clinical outcomes. The findings of this study contribute to a deeper understanding of the time-varying effects of vaccines and oral antiviral drugs in preventing severe complications and mortality associated with COVID-19 infections. The analysis revealed that both interventions exhibited dynamic PE over time, underscoring the importance of considering the temporal aspects of these interventions in mitigating the impact of the pandemic. The results demonstrate that multiple doses of the CoronaVac and Comirnaty vaccines provide substantial protection against severe outcomes and death, suggesting potential benefits of additional boosters for certain vaccine types. However, an exponential decay in efficacy was observed within six months of the fourth dose for both vaccines, indicating a possible need for periodic boosters to maintain optimal protection. The analysis of the oral antiviral drugs, molnupiravir and nirmatrelvir-ritonavir, revealed distinct time-varying patterns in their PE. Taking the drugs within 5 days of confirmed COVID-19 diagnosis was more effective than taking them outside this window. For patients receiving the antiviral drugs within 5 days of confirmed diagnosis, both drugs exhibited maximum protection within the first 5 days since prescription, followed by a gradual decline in efficacy over the subsequent 28 days. In contrast, low or even negative effects were observed for patients who took the antiviral drugs outside the 5-day window. This finding underscores the critical importance of early intervention with these therapies to optimize their potential benefits in preventing severe COVID-19 outcomes. The observed variations in PE over time highlight the necessity of considering the temporal dynamics of pharmacological interventions when evaluating their impact on clinical outcomes. The results of this study have important implications for public health policies and clinical practice. The observed time-varying effects of vaccines and antiviral drugs underscore the need for tailored vaccination schedules and treatment protocols. Regular monitoring and adaptive vaccination strategies may be necessary to maintain optimal protection levels and mitigate the impact of emerging SARS-CoV-2 variants.
For oral antivirals, it is well established that their effectiveness is highest when they are given early in the course of disease. The strengths of this study include its use of a comprehensive, territory-wide dataset capturing real-world COVID-19 management in Hong Kong. The ability to link vaccination, oral antiviral usage, and clinical outcome data allows for a comprehensive evaluation of the synergistic effects of these interventions. However, the observational nature of the data may be subject to residual confounding, and the findings may be specific to Hong Kong’s epidemiological context and the predominant viral lineages during the study period. Further research in diverse settings would be valuable to confirm the generalizability of these results. Additionally, different categorization standards could be adopted for age and the CCI; smoothing splines are an alternative, although they would add further complexity to the model. The CCI may also only partially capture the risk of severe SARS-CoV-2 infection. In conclusion, this study provides crucial insights into the time-varying effects of vaccines and oral antiviral drugs in preventing severe COVID-19 outcomes in Hong Kong. The findings highlight the dynamic nature of these interventions and the importance of considering temporal aspects in their implementation and evaluation. Continued research and adaptation of public health strategies will be crucial in the ongoing battle against the COVID-19 pandemic.
Development and validation of a questionnaire on parental health literacy in the context of promoting healthy lifestyles during childhood: a study protocol
830cba11-cdf7-493e-bd89-c118573125a6
11822441
Health Literacy[mh]
As primary caregivers, parents are not only responsible for their own health but also for the health and well-being of their children. This makes them a particularly relevant target group for public health research and practice. Parents support their children and create daily routines around, for example, eating, playing and sleeping over a long period of time. Therefore, they have a significant influence on their children’s healthy behaviours (eg, eating habits, physical activity, media use and sleep patterns). Parents can significantly impact their children’s lifestyle by adopting positive behaviours, imparting healthful routines and creating environments that promote well-being. Practical strategies to support this include offering a balanced diet, reducing the intake of fast food and sugary products, promoting family meals free from digital distractions, involving children in meal preparation, encouraging physical activity, limiting screen time and ensuring consistent bedtime routines. However, developing and maintaining a healthy lifestyle in childhood requires parents to possess the necessary knowledge and skills to make appropriate decisions for their children. Health-related information is a key resource in this regard, and in today’s information age parents actively seek and passively receive a great deal of such information in both digital and analogue form. This requires parents to find, understand and critically evaluate health-related information and use it in everyday family life with their children. This ability is also described as parental health literacy (PHL). While research on general health literacy (HL) has a longer history, the specific focus on PHL in the context of child health is still relatively underdeveloped. Nevertheless, there is a growing body of evidence indicating that low PHL is associated with poorer health outcomes in children, including unhealthy eating habits, higher obesity rates, more medication errors and more frequent visits to emergency departments. Specifically, the growing prevalence of overweight and obesity in children underscores the urgent need to cultivate healthy lifestyles from an early age to prevent secondary diseases in adulthood. In 2022, Pawellek and Brandstetter undertook a comprehensive scoping review to explore the associations between PHL and health behaviours (HB) directed at the child, including aspects such as injury prevention, dietary behaviour, medication use and utilisation of healthcare services, alongside an examination of their statistical modelling. The authors found that most studies assume a direct relationship between PHL and child-directed HB. This emphasises the importance of considering PHL as a key determinant in improving children’s health. Moreover, the review points out the considerable variability in PHL measurements across studies, with some relying on validated generic tools such as the Newest Vital Sign, the Rapid Estimate of Adult Literacy in Medicine and the Test of Functional Health Literacy in Adults (TOFHLA), while others use unvalidated instruments with self-developed items. Measuring PHL has been a persistent challenge, as demonstrated by DeWalt and Hink’s systematic review. The review found that reading literacy is often used as a proxy for PHL. This challenge was echoed by Sanders et al , who criticised the inadequacy of instruments focusing on reading skills and advocated for a broader conceptualisation of PHL to address the health needs of children more effectively.
The widespread variability in PHL measurement methods underlines the critical need for developing standardised, theoretically informed models and reliable tools to assess PHL. To achieve this, it is crucial to first clarify the unique characteristics of PHL. Compared with the general understanding of HL, parental handling of health information differs in that it pertains to their children’s health. This process involves a proxy character, as parents must act on behalf of their child, whose health needs they must interpret, articulate and address. This adds a layer of complexity, as parents are not only required to understand health information but also to apply it in the context of another individual’s needs. Initial research on this topic shows that the search for information, its application and the interaction with medical service providers have a surrogate character. Kubb and Foran introduce the term proxy health communication in their work, which refers to any behaviour intended to seek information for the purpose of obtaining support or advice on a health-related topic for another person. Research on the information-seeking behaviour of parents shows that, in addition to the very frequently used digital media, personal contacts are still the most common source of health information for parents with young children, regardless of the child’s health status. Parents use a variety of sources, such as online and print media, as well as their personal network, to enhance their understanding and verify the accuracy of information regarding their child’s health. Therefore, it is crucial for parents to discuss their child’s health concerns with health professionals, family members, caregivers, friends and acquaintances. As these are essential aspects of HL, these unique characteristics should be taken into account when considering and researching PHL. To gain a comprehensive and in-depth understanding of the skills that parents need when dealing with health information, a questionnaire that considers PHL’s particular characteristics is required. As indicated by the current research, most previous research on PHL has used questionnaires to measure generic HL. Thus, it does not sufficiently reflect the situational and specific characteristics of the parenting context. There are only a few international questionnaires specifically designed for assessing PHL. Some of these questionnaires are based on specific clinical conditions such as childhood epilepsy (HLQ-P) or early childhood allergy prevention (ECAP-HL). Other instruments, such as the Parental Health Literacy Activity Test (PHLAT) and the Parenting Plus Skills Index (PPSI) are performance- and knowledge-based tests that focus primarily on dimensions of functional literacy (eg, numeracy skills) and parents’ knowledge of health and clinical health issues. However, an in-depth empirical understanding of PHL requires a questionnaire that captures the entire multidimensionality of HL as defined by Sørensen et al . Zhang et al made this attempt by adapting the Health Literacy Questionnaire (HLQ) to measure generic HL for use with Chinese parents and validating the Chinese Parental Health Literacy Questionnaire (CPHLQ). However, this questionnaire is limited to Chinese parents with children between the ages of 0 and 3. As this overview shows, most of the specific instruments either have a clinical focus or primarily examine functional literacy aspects of PHL. 
However, there is currently no psychometrically sound questionnaire available in the broader field of prevention and health promotion. As a consequence, there is a lack of empirical data that provides information on the level of PHL within and across countries. Hence, it is vital to have a reliable and valid measurement of PHL to understand its relevance to children’s health and to develop tailored interventions. In response to previous research findings, this study aims to combine the central determinants of a healthy lifestyle in childhood (nutrition, physical activity, sleep and media consumption) with a holistic understanding of HL according to Sørensen et al . This requires further conceptual development of the PHL construct, which describes the ability of parents to find information about their children’s health and deal with it in a way that contributes to the development of a healthy lifestyle in their children. Additionally, the construct will be expanded to include the surrogate character of parental information seeking and processing. Given that HL is a dynamic, context-specific construct, it is essential to consider a child’s age and developmental stage, such as whether they are an infant or toddler. This perspective offers valuable insights into the age and developmental challenges faced by parents. Focusing on children aged 3 to 6 years is particularly significant. Around the age of 3 years, children enter the preschool phase, marked by rapid physical, cognitive and social development. During the second and third years of life, children begin to develop autonomy in health-related behaviours, such as eating independently and improving motor skills. This stage is critical for establishing foundational health behaviours that can persist into adulthood. At around the age of 6 years, the transition to school introduces new socialisation agents, such as teachers and peers, shifting the child’s environment and the role of parents in shaping health behaviours. These early milestones require parents to provide tailored support to foster healthy habits. This makes the preschool years vital for interventions aimed at establishing and maintaining healthy habits over the long term. Moreover, the development of the PHL questionnaire is also part of the broader frühstArt project, which targets children aged 3 to 6 years with overweight and obesity. This initiative supports families in cultivating healthy lifestyle habits early, thereby encouraging a healthy weight trajectory. By strengthening family-based health-promoting factors and fostering collaboration across healthcare sectors, the project takes a multilevel approach to supporting healthy child development. Objective and research questions This study aims to develop and psychometrically evaluate a German-language questionnaire for assessing PHL with a focus on the promotion of a healthy lifestyle in children. To accomplish this, the study addresses two main research questions: How can the generic HL framework proposed by Sørensen et al be modified and extended to fit the specific context of parental information behaviours in the context of promoting healthy lifestyles during childhood? What psychometric quality (objectivity, validity and reliability) characterises the newly developed questionnaire? Patient and public involvement This study will actively engage parents in the research process. Their input and feedback are instrumental in shaping the development of the questionnaire and ensuring its relevance and applicability to real-world contexts.
Parents will be engaged during the item pool development phase through cognitive interviews to ensure the questionnaire’s relevance and clarity. They will also provide feedback on the usability of the final instrument. Design A mixed-methods design will be used to develop and validate the questionnaire on PHL. The methodological approach consists of four steps and is based on the quality standards for the development of measurement questionnaires in social science survey research . Step 1: literature research and item pool development During the initial stage of questionnaire development, a thorough list of items is compiled based on a scoping review of current literature. Specifically, the international body of research with regard to the necessary skills of parents in dealing with health information related to a healthy lifestyle in childhood will be mapped. The literature search focuses on original research articles, validation studies, empirical research and concept papers published in German or English. This search is conducted across the CINAHL, EMBASE, PsycINFO and Web of Science databases to encompass a wide array of relevant academic contributions. Based on the empirical findings reviewed so far, it can be assumed that communication with healthcare providers, family members, other caregivers, friends or acquaintances is an important skill area for parents. Therefore, it should be included as a separate subscale in addition to the four dimensions (find, understand, assess and apply). It is planned to combine the central behavioural determinants of a healthy lifestyle in childhood—nutrition, physical activity, media use and sleep—with the dimensions of HL (find, understand, assess, apply, communicate) in an item matrix. This structure, as illustrated in , will serve as the foundation for assessing construct validity in subsequent analyses. Step 2: Delphi study with experts A Delphi study will be conducted in three rounds to test the content validity of the underlying item pool. This method will be used to refine and optimise the item pool. Delphi surveys generally have different types and application variations, with other characteristics suitable for various application areas. The core feature of the Delphi method is successive rounds in which participants with different professional backgrounds and expertise are asked to share their knowledge and opinions in a structured survey process to reach a consensus on defined questions in several rounds of revision. Compared with the traditional method, in which participants often remain anonymous, the Group Delphi enables a direct discursive exchange between the participants, which means that any dissent can be discussed and clarified quickly. The Delphi study planned here will consist of three rounds and contain elements of the classic Delphi method and the Group Delphi method . According to Niederberger and Renn, a Group Delphi should involve 12–25 experts. Therefore, the methodological approach aims to involve at least 12 experts. For this study, experts are defined as researchers who have been active in the field of HL, PHL or related topics for at least 3 years. Potential experts will be identified through a search and analysis of publications, conference proceedings and professional networks in the relevant fields. The identified experts will be contacted via personalised emails, which will include detailed information about the research project and the purpose of the Delphi study. 
Participation will be voluntary, and written consent will be obtained before commencing the study. The first round of the Delphi study consists of an online survey in which the participating experts can rate all items with regard to their relevance and clarity on a four-point Likert scale. For each item, experts should provide open-ended explanations for their answers and, if necessary, recommendations for further development. To measure content validity, experts assess the relevance on a scale of ‘very relevant’, ‘fairly relevant’, ‘not very relevant’ or ‘not relevant at all’. This rating is used to calculate the content validity index (CVI) at both the item (I-CVI) and scale (S-CVI) levels. The first step in calculating the I-CVI involves dichotomising the values into ‘relevant’ (1 point) and ‘not relevant’ (0 points). Next, the points for each item are summed up and divided by the number of experts. Lynn recommends an I-CVI of at least 0.78 for expert ratings greater than or equal to six. For items with an I-CVI above 0.78, open-ended responses are used to adjust the wording and are re-rated by the experts in the next round. Additionally, the Scale Content Validity Index/Average (S-CVI/Ave) calculation aims to assess the content validity of a subscale as a whole. As recommended by Ausserhofer et al and Polit et al , an S-CVI/Ave of 0.90 or higher is deemed acceptable. A value significantly below 0.90 indicates a need for subscale revision. In the subsequent analysis of this first survey round, closed answers will be evaluated quantitatively through descriptive analyses, while open-ended responses will be qualitatively assessed using content analysis, following Kuckartz’s methodology. The experts who participated in the first round will be invited to a 90-min online workshop in the second Delphi round. The main discussion points and arguments will be recorded during the online workshop. The workshop will be followed by a second evaluation phase in which the protocol and the visualised key discussion points will be evaluated using qualitative content analysis, according to Kuckartz. Based on the evaluation of the first Delphi round and the online workshop, the item battery will be revised accordingly. The expert panel will then re-evaluate the modified item pool in the third Delphi round. As with the first round, participants will receive an online questionnaire to evaluate the relevance and appropriateness of the item wording. These evaluations will inform the final specification and optimisation of the item pool. Step 3: cognitive interviews with parents After conducting the Delphi study, the refined item pool will undergo a validity analysis using cognitive interviews with parents. This step should answer the following subquestions : Which questions are not understood or are considered problematic? Are questions perceived as redundant by the respondents? Does the meaning of questions as perceived by respondents match the intended meaning? Are respondents using the full range of the response scale? Do previous questions affect the response to subsequent questions? To this end, a sample of 15–20 parents with preschool-age children will be recruited using the snowball method. The inclusion criteria for the planned interviews require participants to be at least 18 years old, have sufficient written and spoken proficiency in German and have children between the ages of 3 and 6 years. 
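A minimal sketch of the content-validity calculations described above is given below. The expert ratings are hypothetical, and the cut-point used to dichotomise the four-point scale (ratings of 3 or 4 counted as 'relevant') follows common practice rather than a detail stated in the protocol.

```python
import numpy as np

# Rows = items, columns = experts; ratings on the four-point relevance scale
# (1 = 'not relevant at all' ... 4 = 'very relevant'). Hypothetical data.
ratings = np.array([
    [4, 3, 4, 4, 3, 4],
    [2, 3, 4, 3, 2, 3],
    [4, 4, 4, 4, 4, 3],
])

# Dichotomise: ratings of 3 or 4 count as 'relevant' (1), otherwise 0.
relevant = (ratings >= 3).astype(int)

# I-CVI: proportion of experts rating each item as relevant.
i_cvi = relevant.mean(axis=1)

# S-CVI/Ave: average of the item-level I-CVIs across the (sub)scale.
s_cvi_ave = i_cvi.mean()

print("I-CVI per item:", i_cvi.round(2))                     # [1.   0.67 1.  ]
print("S-CVI/Ave:", round(s_cvi_ave, 2))                     # 0.89
print("Items meeting the 0.78 threshold:", np.where(i_cvi >= 0.78)[0])
```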
Heterogeneity in terms of age, gender and educational background will be considered in the recruitment process. An interview is expected to take 60–90 min and may be conducted online or in person. Parents’ preferences will be taken into account during the interview process. The interview methods of think-aloud and probing will be used. In the think-aloud method, interviewees are instructed to vocalise any thoughts that come to mind while answering a question. The probing process involves asking targeted questions about the interviewee’s approach to answering the question. The protocol will include probes based on Tourangeau’s four-stage model: question comprehension, memory, judgement and answer selection. In addition to these pre-formulated probes, proactive and reactive probes are formulated depending on the respondents’ statements. The cognitive interviews are audio-recorded to ensure focused interviewing and later transcription. The transcribed interview material is evaluated using qualitative content analysis, following Kuckartz’s method. The results are used to further modify the item pool. Additionally, a standardised data analysis sheet will be created. This will allow a structured evaluation by creating a case-specific list of all statements made by the interviewees, as suggested by Prüfer and Rexroth. Step 4: validation study The validation study will be conducted as an online-based cross-sectional survey using convenience sampling and will have the same inclusion criteria as the cognitive interviews. Recruitment efforts will focus on reaching a diverse sample of German-speaking parents with regard to age, gender and educational background. Potential avenues include daycare centres, parenting groups and organisations that serve families across a broad range of socioeconomic backgrounds. Particular attention will be given to reaching parents from different educational and socioeconomic backgrounds, ensuring varied representation. These efforts will help achieve a sociodemographically diverse sample reflective of the target population. There are varying recommendations for determining the appropriate sample size. The suggested ratio of subjects to items ranges from 3:1 to 20:1. This study follows the recommendation of Nunnally and Bernstein that the ratio should not be less than 10:1. As the number of items has not yet been finalised, the sampling, including the exact number of cases, can only be planned later. Item analysis Item analysis will be conducted to assess item quality using parameters such as item difficulty and selectivity. Descriptive statistics (mean, SD and missing values) will be presented for each item. The distribution will be tested for normality using the Kolmogorov-Smirnov test, alongside visual inspections using Q–Q and P–P plots. Item difficulty will be assessed for potential floor or ceiling effects, aiming for indices between 0.20 and 0.80. Discriminatory power will be calculated via correlation, and items with low discriminatory power or common variance below 10% will be reviewed for possible exclusion. Assessment of construct validity To assess construct validity, we hypothesise that the five HL dimensions (find, understand, assess, apply and communicate) represent independent but interrelated constructs. Similarly, the four health topics (nutrition, physical activity, media use and sleep) are expected to form distinct yet overlapping domains. Exploratory and confirmatory factor analyses will be conducted to test these assumptions.
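Before the factor analyses are described in more detail, the item-level screening outlined under 'Item analysis' above can be sketched as follows; the simulated responses, the 5-point scoring assumed for the difficulty index, and the column names are placeholders only.

```python
import numpy as np
import pandas as pd

# Hypothetical item responses (rows = parents, columns = items) on a 1-5 scale.
rng = np.random.default_rng(1)
data = pd.DataFrame(rng.integers(1, 6, size=(300, 4)),
                    columns=["find_01", "understand_01", "assess_01", "apply_01"])

# Item difficulty as the mean score rescaled to 0-1; values outside 0.20-0.80
# would flag potential floor or ceiling effects.
difficulty = (data.mean() - 1) / 4

# Discriminatory power: corrected item-total correlation
# (each item against the total score of the remaining items).
totals = data.sum(axis=1)
discrimination = {col: data[col].corr(totals - data[col]) for col in data.columns}

print(difficulty.round(2))
print({k: round(v, 2) for k, v in discrimination.items()})

# Rough sample-size check against the 10:1 subjects-to-items rule of thumb.
print("Ratio of subjects to items:", len(data) / data.shape[1])
```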
We will examine cross-loadings to identify potential overlaps between dimensions and domains and expect strong within-dimension correlations for items assessing the same HL domain across different health topics. A dimensionality check is crucial for ensuring the test’s dimensions align with the concept definition. Correlation analyses, such as exploratory or confirmatory factor analyses, are used to assign items to corresponding subtests or subdimensions. A recommended approach involves a dual analysis: initially employing principal component analysis (PCA) followed by confirmatory factor analysis (CFA) on two separate parts of the sample with equal proportions. Commencing with PCA helps identify the test’s underlying dimensions by establishing the number and importance of factors. Essential preliminary steps include calculating the Kaiser-Meyer-Olkin criterion to assess data suitability for PCA, with a minimum value of 0.5 recommended, and applying the Bartlett test to examine sphericity. Subsequent factor determination can draw on criteria like eigenvalues greater than one, scree plot or parallel analysis. The factor loadings are determined, and the items are then assigned to their corresponding components. In analysing the factor loading matrix, a factor loading greater than 0.5 is a fundamental requirement for the assignment. Additionally, when a variable loads sufficiently on multiple factors, it is important to consider the geometric aspects of the factors. We will use the Varimax rotation method to simplify the factor structure. Once the components are interpreted, they are tested for their measurement accuracy (reliability). The most well-known measure for determining reliability is Cronbach’s alpha. To ensure adequate reliability and meaningful factor loadings, each subscale (eg, HL dimension per health topic) will include a minimum of three items, aligning with recommendations for robust psychometric scales. In the subsequent phase, the factor structure identified by PCA is verified using CFA with the alternate half of the sample. An iterative estimation procedure is used to estimate the parameters, such as the maximum likelihood method or the generalised least square method. In the exploration of CFA, a crucial aspect of assessing the quality of measurement models lies in examining both local and global fit indices. Local fit indices, including factor reliability (FR), average variance extracted (AVE) and the Fornell-Larcker Criterion, provide insights into the internal consistency of the model, the amount of variance captured by the construct and the discriminant validity between constructs, respectively. These local fit indices are vital for a nuanced understanding of the model’s internal structure. It is essential to interpret these local measures alongside global fit indices such as χ 2 , χ 2 /df, goodness of fit, standardised root mean residual and root mean square error of approximation to achieve a holistic understanding of the model’s fit. Each set of measures illuminates different facets of the model’s quality, ensuring a robust evaluation of its validity and reliability. Deriving and testing hypotheses concerning the relationships between two or more constructs is essential for further testing of construct validity. To confirm construct validity, Spearman’s Rho correlations will be calculated between PHL scores and those obtained from the established instrument for measuring general HL, the HLS-EU-Q16. 
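Two of the checks described above, the eigenvalue-greater-than-one criterion on the item correlation matrix and Cronbach's alpha for a retained subscale, can be sketched with NumPy alone; the simulated responses stand in for the real questionnaire data.

```python
import numpy as np

rng = np.random.default_rng(7)

# Placeholder responses: 400 respondents x 10 items on a 1-5 scale.
X = rng.integers(1, 6, size=(400, 10)).astype(float)

# Eigenvalues of the item correlation matrix; the Kaiser criterion retains
# components with eigenvalues greater than one.
eigenvalues = np.linalg.eigvalsh(np.corrcoef(X, rowvar=False))[::-1]
n_components = int(np.sum(eigenvalues > 1.0))
print("Eigenvalues:", eigenvalues.round(2))
print("Components retained (eigenvalue > 1):", n_components)

def cronbach_alpha(items):
    """Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of total)."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1.0 - item_vars.sum() / total_var)

# Alpha for a hypothetical three-item subscale (the planned minimum per cell).
print("Cronbach's alpha:", round(cronbach_alpha(X[:, :3]), 2))
```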
We expect substantial correlations (r>0.50) between the PHL questionnaire and the HLS-EU-Q16, as both aim to measure HL-related skills. Weaker correlations may occur with the newly introduced ‘communicate’ dimension, reflecting its focus on parental proxy roles. To ensure the construct validity of the newly developed instrument for assessing PHL, we will include the participants’ sociodemographic and socioeconomic data. These variables will allow us to examine whether differences in PHL scores, both overall and at the subscale level, exist between specific subgroups, such as parents with low vs high educational attainment or different socioeconomic status. We will also assess whether discriminant validity exists within these subgroups by comparing PHL scores with general HL scores. By examining measurement invariance, we aim to ensure that the instrument is robust, accurately reflects differences in PHL across different populations and has strong construct validity. Assessment of criterion validity To test criterion validity, we will examine correlations between PHL and specific health-related indicators. The evaluation will employ three well-established instruments. Based on prior studies reporting moderate correlations between HL and health-related indicators, we anticipate a moderate positive correlation in the range of r=0.30 to 0.50. Food Frequency Questionnaire : This instrument will measure the frequency and quantity of the consumption of 48 different food items. We hypothesise that higher PHL scores will be positively associated with healthier dietary patterns, such as increased consumption of fruits and vegetables and reduced intake of sugary foods. Comprehensive Feeding Practices Questionnaire : This tool assesses parental practices concerning their children’s diet. We expect that parents with higher PHL will demonstrate more positive feeding practices, including offering a balanced diet and limiting junk food. ICEpop CAPability measure for Adults : This questionnaire measures the quality of life of adults. We hypothesise that higher PHL scores may be positively associated with better quality-of-life outcomes. At the end of the first validation study, the results will be interpreted and the strengths and weaknesses of the new questionnaire for measuring PHL will be discussed. If necessary, this will lead to modifications of the item pool. A second validation study is then conducted with the same methodological steps using the newly adapted questionnaire.
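The convergent- and criterion-validity analyses above reduce to rank correlations between sum scores. A brief sketch using scipy follows; the scores are simulated solely to show the call, not to suggest expected results.

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(3)

# Simulated total scores standing in for the real instruments.
phl_total = rng.normal(size=250)
hls_eu_q16_total = 0.6 * phl_total + rng.normal(scale=0.8, size=250)   # convergent
ffq_healthy_diet = 0.4 * phl_total + rng.normal(scale=1.0, size=250)   # criterion

rho_convergent, p_convergent = spearmanr(phl_total, hls_eu_q16_total)
rho_criterion, p_criterion = spearmanr(phl_total, ffq_healthy_diet)

# The protocol anticipates rho > 0.50 for the HLS-EU-Q16 and roughly
# 0.30-0.50 for the health-related indicators.
print(f"PHL vs HLS-EU-Q16: rho = {rho_convergent:.2f} (p = {p_convergent:.3g})")
print(f"PHL vs dietary indicator: rho = {rho_criterion:.2f} (p = {p_criterion:.3g})")
```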
The presented study received a positive ethics vote from the Ethics Committee of Fulda University of Applied Sciences. Currently, the study is in its third phase of methodological implementation, which involves conducting and analysing cognitive interviews with parents. The questionnaire will be used to evaluate measures to promote PHL. The results of the various methodological steps will be reported in further peer-reviewed publications. Additionally, the results will be presented at national and international scientific conferences.
Should Scotland provide genome-wide sequencing for the diagnosis of rare developmental disorders? A cost-effectiveness analysis
7fc33987-443a-4f8a-a250-3c5289817937
11937054
Pathologic Processes[mh]
This study evaluated the cost effectiveness of six testing strategies for the diagnosis of rare developmental disorders in Scotland. It was found that 2nd-line exome sequencing was less costly and more effective than standard genetic testing. Genome sequencing marginally increased diagnostic yield compared to exome sequencing, but increased costs significantly. NHS Scotland should continue to invest in exome sequencing for the diagnosis of rare genetic conditions, and should monitor the evolving cost and diagnostic yield of genome sequencing over time. Approximately 2–5% of children are born with rare developmental disorders, or manifest symptoms during childhood . Although developmental disorders often have heterogeneous clinical presentations, the most commonly observed features consist of intellectual disability/developmental delay and/or congenital malformations . Many developmental disorders are life-threatening and multi-system, with a profound impact on the quality of life and well-being of patients and families. The combination of the diversity of developmental disorders and the clinical expertise required to diagnose them creates a challenge for publicly funded healthcare systems with increasingly strained budgets. Given that a significant proportion of developmental disorders are believed to have a genetic cause , increasing access to genetic diagnostic testing is a key policy objective of the Scottish Government . Many patients and families with undiagnosed developmental disorders undergo a long, stressful, and costly series of clinical investigations and genetic testing in search of a diagnosis. First-line genetic testing typically involves chromosomal microarray (CMA), and Fragile X testing. When clinical investigations and first-line genetic testing fail to reach a diagnosis, individuals undergo an iterative series of targeted gene panels based on clinical phenotype. Gene panel testing can be a useful diagnostic tool for genetically heterogeneous conditions, or when an individual has clinical features which may fit more than one condition. However, the chance of obtaining a diagnosis from gene panel testing depends on: (i) the clinician requesting the correct gene panel based on the patient’s phenotype; and (ii) the gene causing the rare condition being on the existing gene panel. As a result, the diagnostic yield (proportion of cases receiving a genetic diagnosis) of gene panels may not be optimal . The iterative series of clinical and genetic testing has been labelled the ‘diagnostic odyssey.’ This refers to the time taken between the first presentation at health care services and receiving a correct medical diagnosis . Historically, Scottish patients have waited an average of four years to receive a genetic diagnosis for their rare condition, with many never receiving one . Advancements in genomics offer promising opportunities to end or shorten the diagnostic odyssey for the diagnosis of rare developmental disorders. The comprehensive sequence analysis of a person’s entire genome (genome sequencing – GS), or the protein-coding region of a person’s genome (exome sequencing – ES), may enable quicker diagnosis for rare conditions due to its higher diagnostic yield than standard gene panels . It is well known that over 2,000 genes are associated with developmental disorders . GS and ES offer an opportunity to analyse these genes efficiently. In Scotland, both GS and ES have been offered in a research context. 
Firstly, 1250 children with undiagnosed developmental disorders were offered ES via the Deciphering Developmental Disorders (DDD) study . A further 385 patients and families were offered panel-based analysis of GS via the Scottish Genomes Partnership’s (SGP) involvement in the UK 100,000 Genomes Project . At present, analysis of the Developmental Disorder Genotype-to-Phenotype (DDG2P) gene panel from exome sequence data is currently offered in NHS Scotland as a specialised diagnostic service to individuals presenting with a severe developmental disorder. The economic evidence on the value for money offered by genome-wide sequencing for rare disease diagnosis is growing . Despite this growing literature, the cost effectiveness of genome-wide sequencing remains highly uncertain, varying significantly depending on the study context, patient population and strategies evaluated. The health economic evidence in a Scottish context is particularly limited, with only one economic evaluation in the United Kingdom (UK) , and none in Scotland. Given that Scotland operates under a devolved health care budget, separate from other parts of the UK, health economic evidence is required to inform the development of a Scottish genomic testing strategy. Abbott et al. present preliminary estimates of the cost of GS versus the standard genetic testing pathway for rare disease diagnosis in Scotland. Trio GS (where trio refers to sequencing DNA samples from a child plus two biological parents) was estimated to cost £6625, compared to £1841 per patient for standard genetic testing. However, the study did not evaluate the cost of alternative genome-wide sequencing options, including trio ES. Additionally, the study stopped short of conducting an incremental cost-effectiveness analysis of alternative strategies in terms of their diagnostic yield. Building on these preliminary findings, this study aims to assess the cost effectiveness of genetic and genomic testing strategies for the diagnosis of rare developmental disorders in Scotland. A decision tree model was developed using TreeAge Pro (TreeAge Pro ® 2021). Six genetic and genomic testing strategies were evaluated for the diagnosis of rare developmental disorders, from a Scottish health care system perspective. Alternative configurations of standard genetic testing, GS and ES were evaluated at varying time points in the diagnostic pathway. The strategies were selected based on plausible alternatives which could be delivered in Scottish clinical practice and were informed and validated by expert clinical opinion. A simplified schematic of the model structure is presented in Fig. . Standard genetic testing refers to the historical ‘diagnostic odyssey’ of chromosomal microarray, Fragile X, single gene, and gene panel testing. Although this strategy was labelled standard genetic testing , it is not currently ‘standard care’ in NHS Scotland, given that a trio ES service is now offered for severe developmental delay. Standard genetic testing was included in the model as a means of estimating the historical cost and diagnostic yield of the diagnostic odyssey and was assumed to involve singleton (proband-only, without samples from parents or relatives) testing. In addition to standard testing, trio GS was evaluated as a last-resort test (after all standard testing fails to reach a diagnosis), as a 2nd-line test (replacing single gene and gene panel testing) and as a 1st-line test (replacing all standard genetic testing). 
Trio ES was evaluated as a last-resort test and 2nd-line test. 1st-line trio ES was not evaluated because, at the time of model conception and development, expert clinical opinion indicated that chromosomal microarray testing would always occur prior to trio ES. Despite recent developments indicating that trio ES may be able to replace array testing [ – ], there was insufficient data on the cost and diagnostic yield of first-line ES to reliably model this strategy. In all strategies, ES and GS were assumed to involve trio testing, using DNA samples from the proband plus two biological parents. This reflected the diagnosis of rare developmental disorders in Scotland, where the majority of cases involve trio testing. NHS Scotland data from the SGP and DDD research studies and the NHS Scotland DDG2P pipeline, as well as review of the clinical literature, informed the model. Whilst the SGP and DDD studies recruited a broad range of rare disease phenotypes, we focus on rare developmental disorders due to the significantly larger sample size for developmental disorders compared to other rare conditions. This increased the precision of cost, diagnostic yield, and cost-effectiveness estimates. Table summarises the cost and diagnostic yield parameters used in the model. Further information on all model inputs, assumptions and distributions is available in the Online Supplementary Material. In Table , all cost and diagnostic yield estimates apply to individual tests rather than entire testing strategies. For example, the diagnostic yield of last-resort trio GS does not include the yield of prior chromosomal microarray, Fragile X and gene panel testing. Costing standard genetic testing The cost of standard genetic testing was estimated by attaching unit costs to the pre-genomic testing histories of SGP and DDD study participants. Testing histories included chromosomal microarray, Fragile X, single gene tests and gene panels. Genetic testing costs were derived from a workload unit-based method developed by the UK Genetic Testing Network , with each genetic test placed into one of eight costing bands reflecting the complexity of the test in terms of staff time, laboratory inputs and overheads. The cost of clinic visits depended on whether the appointment was with a genetics consultant (£396 per appointment ) or counsellor/nurse, where the Personal Social Services Research Unit (PSSRU) cost per minute was used. Costing trio genome sequencing and exome sequencing The workload unit costing method has not been updated to include GS and ES. Trio GS and ES costs were estimated using a combination of micro-costing at Scotland’s four regional genetics centres and charges to the regional centres from third-party providers. Although charges are not necessarily indicative of costs , the charges from third-party providers reflect the price which NHS Scotland would need to pay to deliver a genome-wide sequencing pipeline, including sequencing and bioinformatic analysis. These charges therefore reflect the opportunity cost of NHS Scotland’s expenditure and the health care system perspective of the analysis. The base-case analysis used trio GS costs based on the SGP study GS pipeline, updated for both inflation and changes in the cost of delivering GS in clinical practice rather than in a research context. The cost of trio ES was estimated using micro-costing at South East Scotland Genetic Service, where trio ES is currently offered for severe developmental disorders. 
Diagnostic yield Diagnostic yield (the proportion of cases receiving a genetic diagnosis) was used as the clinical effectiveness measure. A combination of primary data from the SGP and DDD studies and the DDG2P pipeline, as well as systematic review data and expert clinical opinion, was used to estimate the diagnostic yield of each strategy. For standard genetic testing, diagnostic yield estimates were obtained from systematic review data and expert clinical opinion. The SGP and DDD research studies provided estimates of the diagnostic yield of trio GS and ES as a last-resort test, where the eligibility criteria typically required exhaustive standard genetic testing with residual unmet diagnostic need. The NHS Scotland DDG2P trio ES pipeline provided insight into the diagnostic yield of trio ES as a 2nd-line test , after chromosomal microarray and/or Fragile X testing had failed to reach a diagnosis. No primary data was available for the diagnostic yield of 1st-line and 2nd-line GS in NHS Scotland; estimates for these strategies were obtained from systematic review data . Cost-effectiveness analysis The incremental cost per additional diagnosis (ICAD) was estimated as: $$ICAD = \frac{C_{1} - C_{0}}{DY_{1} - DY_{0}}$$ where C 1 and DY 1 are the mean costs and diagnostic yield for a given testing strategy, and C 0 and DY 0 are the mean costs and diagnostic yield of the next-best alternative strategy. Cost-effectiveness results were plotted on an efficiency frontier, connecting strategies that are successively more costly and more effective . Strategies on the efficiency frontier may be considered cost effective, depending on the decision maker’s willingness to pay (WTP) per additional diagnosis. Strategies which are not on the efficiency frontier are ‘inefficient’ or ‘dominated,’ in that an alternative strategy could be implemented with a lower cost and higher effectiveness. Expert clinical opinion indicated that the six genetic and genomic testing strategies could be delivered within three years in clinical practice. The cost effectiveness of each strategy was thus evaluated over a three-year time horizon, starting at the point of referral to the clinical genetics service, and ending after all testing has/has not reached a diagnosis. Following the recommendation of the National Institute for Health and Care Excellence (NICE), all cost and diagnostic yield parameters were discounted at an annual rate of 3.5% . Modelling uncertainty: sensitivity analysis In the base-case analysis, trio GS costs were based on an updated SGP study pipeline, with a cost of £5576 per trio. One-way (deterministic) sensitivity analysis explored the impact of a second GS costing option, involving outsourcing sequencing, bioinformatic analysis and data storage to Genomics England, with a cost of £3781 per trio. A threshold analysis investigated how much each cost or diagnostic yield input would need to increase or decrease before the optimal strategy changes. This analysis requires knowledge of the decision maker’s willingness to pay (WTP) per additional diagnosis. Drawing on methodology used to estimate the value of a statistical life , we estimated an implied WTP based on historical genetic testing policy in Scotland. Dividing the average standard genetic testing cost (£2429) by its diagnostic yield (28.2%) implies that decision-makers have (historically) been willing to pay £8613 per diagnosis.
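As a small illustration of the two quantities defined above, the following sketch implements the ICAD formula and reproduces the implied willingness-to-pay calculation from the historical standard-testing figures quoted in this section. The function name is ours and the worked ICAD example uses invented numbers purely to show the arithmetic.

```python
def icad(cost_new: float, yield_new: float, cost_ref: float, yield_ref: float) -> float:
    """Incremental cost per additional diagnosis versus the next-best alternative."""
    return (cost_new - cost_ref) / (yield_new - yield_ref)

# Hypothetical example: a strategy costing £3000 with a 45% yield, compared with
# a cheaper alternative costing £1500 with a 40% yield.
print(f"£{icad(3000, 0.45, 1500, 0.40):,.0f} per additional diagnosis")   # £30,000

# Implied willingness to pay from historical policy: average cost of standard
# genetic testing divided by its diagnostic yield (figures quoted in the text).
standard_cost, standard_yield = 2429.0, 0.282
print(f"Implied WTP per diagnosis: £{standard_cost / standard_yield:,.0f}")   # ≈ £8,613
```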
Probabilistic sensitivity analysis (PSA), with a Cost-Effectiveness Acceptability Curve (CEAC) generated for each testing strategy, estimated the probability that each strategy is cost effective for a range of WTP values per additional diagnosis, from £0 to £100,000. In addition to addressing uncertainty in the cost and diagnostic yield of each strategy, this analysis also addresses uncertainty in the decision maker’s WTP per additional diagnosis.
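The sketch below shows one way such a probabilistic analysis can be organised: strategy costs and diagnostic yields are drawn repeatedly from assumed distributions, strategies are compared by net monetary benefit (WTP × yield − cost), and the proportion of draws in which each strategy wins approximates a point on its cost-effectiveness acceptability curve. The distributions, spread parameters and the two-strategy comparison here are placeholders for illustration; the actual model was built in TreeAge Pro with the distributions reported in the Online Supplementary Material.

```python
import numpy as np

rng = np.random.default_rng(42)
n_sim = 5_000

# Placeholder strategies: (mean cost per trio in £, mean diagnostic yield).
strategies = {"2nd-line ES": (1402, 0.421), "1st-line GS": (5576, 0.460)}

samples = {}
for name, (mean_cost, mean_yield) in strategies.items():
    sd = 0.2 * mean_cost                               # assumed 20% standard deviation
    shape, scale = (mean_cost / sd) ** 2, sd ** 2 / mean_cost
    costs = rng.gamma(shape, scale, n_sim)             # gamma distribution for costs
    a, b = mean_yield * 100, (1 - mean_yield) * 100    # beta distribution for yields
    yields = rng.beta(a, b, n_sim)
    samples[name] = (costs, yields)

# Probability each strategy has the highest net monetary benefit at selected
# willingness-to-pay values (a full CEAC would sweep £0 to £100,000).
names = list(samples)
for wtp in (8_613, 50_000, 100_000):
    nmb = np.column_stack([wtp * samples[n][1] - samples[n][0] for n in names])
    wins = np.bincount(nmb.argmax(axis=1), minlength=len(names)) / n_sim
    print(f"WTP £{wtp:,}:", dict(zip(names, wins.round(2))))
```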
Base-case cost effectiveness results are presented in Table , with strategies reported in ascending cost order. Costs and diagnostic yields are reported for the entire strategy (rather than for individual tests). For example, for 2nd-line ES, the cost (£1402) and diagnostic yield (42.1%) include the expected cost and diagnostic yield of prior chromosomal microarray and/or Fragile X testing. The cost and diagnostic yield of each strategy are also discounted at a rate of 3.5% per annum after year 1. As a result, future costs and diagnoses are weighted less than current costs and diagnoses. 2nd-line ES had the lowest cost, at £1402 per trio, with a diagnostic yield of 42.1%.
Standard genetic testing was dominated by 2nd-line ES, with a higher cost (£2429) and lower diagnostic yield (28.2%). Compared to 2nd-line ES, last-resort ES had an incremental cost of £1766, with an incremental diagnostic yield of 2.7%, resulting in an incremental cost per additional diagnosis of £65,407. 2nd-line GS was dominated by last-resort ES, with a higher cost per trio and marginally lower diagnostic yield. First-line GS offered an additional 1.2% diagnostic yield compared to last-resort ES, at an additional cost of £2,408 per trio, resulting in an incremental cost of £200,666 per additional diagnosis. Last-resort GS was the most expensive testing strategy, at £6112 per trio. Compared to first-line GS, last-resort GS had an incremental cost of £536, with an additional yield of 0.1%. This gave an incremental cost per additional diagnosis of £536,000. Figure illustrates the expected cost and diagnostic yield of each testing strategy on an efficiency frontier. Strategies which are undominated (2nd-line ES, last-resort ES, 1st-line GS and last-resort GS) are connected by the orange line, forming the efficiency frontier. These strategies could be considered cost effective, depending on the decision maker’s WTP per additional diagnosis. Strategies which are not on the orange line (standard testing and 2nd-line GS) are ‘inefficient’ or ‘dominated’ strategies, as an alternative strategy could be implemented with a lower cost and higher diagnostic yield. Sensitivity analysis: lower GS costs The base case analysis assumed that trio GS is delivered using an updated version of the trio GS pipeline from the SGP study. To assess the impact of lower GS costs, a second analysis was conducted using a GS cost of £3781 per trio. Table presents the cost-effectiveness results for this lower GS cost. 2nd-line ES remained a cost-saving option compared to standard genetic testing, saving £7388 per additional diagnosis (a £1027 reduction in cost alongside a 13.9% increase in diagnostic yield). Compared to 2nd-line ES, 1st-line GS increased costs by £2379 per trio, and increased yield by 3.9%. This gave an incremental cost per additional diagnosis of £61,000 for 1st-line GS, compared to 2nd-line ES. Last-resort GS had an incremental cost of £1152 compared to 1st-line GS, with an incremental diagnostic yield of 0.1%. This gave an incremental cost of £1,152,000 per additional diagnosis for last-resort GS. Threshold analysis Assuming a WTP of £8613 per additional diagnosis, it was estimated that, everything else equal: The cost of trio GS would need to fall to £1753 per trio before first-line GS becomes cost effective. The cost of single gene tests/panels (including clinic visits) would need to fall to £893 before last-resort ES becomes cost effective. The yield of first-line GS would need to increase to 89% before 1st-line GS becomes cost effective. The yield of second-line GS would need to increase to 87% before second-line GS becomes cost effective. Last-resort GS is never a cost-effective option. The yield of single gene tests and panels would need to increase to 45% before last-resort ES becomes cost effective.
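To illustrate how the dominance reasoning and the incremental costs per additional diagnosis reported above fit together, the sketch below reconstructs the efficiency frontier from the whole-strategy base-case figures quoted in the text. It is an illustration rather than the TreeAge model: 2nd-line GS is omitted because its exact cost and yield are not quoted in the prose, only simple dominance is checked (not extended dominance), and small rounding differences from the published ICADs are possible.

```python
# Base-case whole-strategy costs (£/trio) and diagnostic yields quoted above;
# 2nd-line GS is omitted because its exact figures are not given in the text.
strategies = [
    ("2nd-line ES",      1402, 0.421),
    ("Standard testing", 2429, 0.282),
    ("Last-resort ES",   3168, 0.448),
    ("1st-line GS",      5576, 0.460),
    ("Last-resort GS",   6112, 0.461),
]

def efficiency_frontier(strategies):
    """Sort by cost, drop strongly dominated strategies, report successive ICADs."""
    ordered = sorted(strategies, key=lambda s: s[1])
    frontier = []
    for name, cost, dy in ordered:
        # Dominated: an already-kept (cheaper) strategy has equal or higher yield.
        if frontier and dy <= frontier[-1][2]:
            print(f"{name}: dominated")
            continue
        if frontier:
            ref_name, ref_cost, ref_dy = frontier[-1]
            icad = (cost - ref_cost) / (dy - ref_dy)
            print(f"{name} vs {ref_name}: £{icad:,.0f} per additional diagnosis")
        frontier.append((name, cost, dy))
    return frontier

efficiency_frontier(strategies)
```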
The probabilistic sensitivity analysis indicated that, when the base case GS cost of £5576 per trio was used, 2nd-line ES was most likely to be cost effective at WTP values up to £83,000 per additional diagnosis. At the implied WTP value of £8613 per additional diagnosis, 2nd-line ES had a 93% chance of being cost effective. When the lower GS cost of £3781 per trio was used, 2nd-line ES was most likely to be cost effective at WTP values up to £48,000 per additional diagnosis. At the implied WTP value of £8,613 per additional diagnosis, 2nd-line ES had a 73% probability of being cost effective. The cost-effectiveness acceptability curve (CEAC) for each strategy is presented in the Online Supplementary Material. Informing a Scottish genomic testing strategy Compared to standard genetic testing, 2nd-line ES (after 1st-line CMA and Fragile X) resulted in a 13.9% increase in diagnostic yield (from 28.2% to 42.1%) and a £1027 decrease in cost (from £2429 to £1402). Strategies involving GS had significant incremental costs, with minimal improvement in diagnostic yield compared to ES. As a result, the incremental cost per additional diagnosis for first-line GS ranged from £61,000 to £200,666, depending on the GS cost. 2nd-line and last-resort GS were not cost-effective testing strategies compared to first-line GS. The evolving cost and diagnostic yield of GS and ES should be carefully monitored within future health economic analyses. Recent developments, including a press release from Illumina , indicate that its NovaSeq X series may reduce the cost of GS significantly. However, it remains unclear how these reductions in GS costs would translate into service delivery in clinical practice. Given this uncertainty, our threshold analysis is pertinent, highlighting that significant reductions in cost and/or improvements in diagnostic yield are required before 1st-line GS becomes a cost-effective testing strategy. Assuming a WTP of £8613 per additional diagnosis, 1st-line GS would need to cost £1753 per trio or have a yield of 89% before it becomes cost effective.
The probabilistic sensitivity analysis indicates that, unless the decision-maker’s WTP per additional diagnosis exceeds £83,000 (or £48,000 for the lower GS cost), 2nd-line ES is most likely to be cost effective for the diagnosis of rare developmental disorders. Comparing our results with other cost-effectiveness analyses of GS for the diagnosis of rare genetic conditions highlights the uncertainty which remains in the value for money offered by genome-wide sequencing. Although our results for trio ES are broadly in line with other studies, finding that early initiation of ES offers substantial cost savings relative to standard genetic testing [ – ], the health economic literature has not reached a consensus regarding the cost effectiveness of GS relative to ES . While some studies found that GS ranged from cost-neutral to cost-saving [ – ], others indicated that GS has substantial incremental costs with only modest improvements in diagnostic yield . The heterogeneity in the economic evaluation literature likely reflects the structural, methodological and contextual heterogeneity across studies. The cost effectiveness of GS, ES and standard genetic testing is often estimated using disparate methodological frameworks (cost-effectiveness, cost-utility or cost–benefit analysis), applied to various contexts and patient populations (specific rare conditions, all rare conditions, developmental delay), with inconsistent baseline comparators (comparison with the next-best alternative, standard testing or no testing). This makes it challenging to draw broad comparisons between our results and the economic evaluation literature as a whole. However, it highlights the importance of noting the specific context to which our results apply; the diagnosis of rare developmental disorders in NHS Scotland. We find that, within this context, trio ES offers substantial cost savings relative to standard testing, while trio GS has significant incremental costs and minimal improvement in diagnostic yield relative to ES. Limitations This study evaluated six alternative genetic and genomic testing strategies. These strategies reflected plausible alternatives which could be delivered in Scottish clinical practice and were informed and validated using expert clinical genetics opinion. However, the strategies evaluated may not be exhaustive. For example, trio ES was not evaluated as a first-line test. Recent developments in the ability of trio ES to detect copy-number variations (CNVs) may reduce the need for first-line array testing, making first-line ES a plausible alternative [ – ]. Despite this development, the data on the cost and diagnostic yield of first-line ES was insufficient to model it as a comparator. Additionally, a significant proportion of patients remain undiagnosed after trio ES. It is currently unclear whether these patients could receive trio GS following non-diagnostic ES. Marshall et al. (2017) described “multiple cascading uncertainties” associated with the economic evaluation of genome-wide sequencing technologies . Among these uncertainties is the evolving cost of GS and ES. We estimated that, using a similar pipeline to the SGP research study, trio GS would cost between £3,781 and £5,576 depending on the sequencing provider. In addition to reductions in the cost of sequencing, improvements have been made in several key areas of GS which may reduce costs compared to the SGP research study. 
These include reductions in data storage costs, more efficient bioinformatics pipelines and reduced clinical scientist time required for variant interpretation and analysis. As a result, it is unclear whether the trio GS costs estimated in this study reflect current GS costs in a clinical context. In particular, the base-case trio GS costs used in this study were based on early experience of GS in a research study context. However, several of the potential GS cost reductions may also apply to trio ES. Improvements in diagnostic yield would also change the cost-effectiveness results. The diagnostic yield of both GS and ES is likely to improve as we learn more about the structure and function of the human genome. This study used diagnostic yield as a measure of ‘effectiveness’ of alternative testing strategies. Diagnostic yield was chosen as, within the National Services Division (NSD) of NHS Scotland, ‘value for money’ decisions for genetic testing are often made based on cost effectiveness, with a focus on diagnostic yield, as well as budget impact considerations . Additionally, qualitative pilot work in a small sample of SGP and DDD study participants indicated that quality-adjusted life years (QALYs), the standard health economic utility measure, may not be sensitive to changes in patients’ and families’ utility. However, recent economic evaluations of genome-wide sequencing have employed cost-utility analysis (CUA), using QALYs as the outcome measure [ , , ]. These studies have assessed the value for money offered by genome-wide sequencing in terms of long-term costs and consequences such as changes in clinical management, avoided tests and clinic visits, and increases in life expectancy. The focus on diagnostic yield using a CEA framework does not account for these long-term outcomes. Whilst the chance of diagnosis is clearly fundamental, focusing on diagnostic yield alone also fails to account for the broader value of genome-wide sequencing to patients and families with rare conditions. Previous research has found that service users value a wide range of clinical, informational, process and psychological factors associated with GS, beyond the chance of diagnosis [ , – ]. This includes changes in clinical management, access to support and services, information for family planning, waiting times for genetic testing results, relief, peace of mind and closure. Within the economic evaluation literature, recent studies have attempted to value these broader factors using patient preference data and cost–benefit analysis (CBA) modelling . In future research, we will use cost–benefit analysis to evaluate the broader utility of genome-wide sequencing to patients and families with undiagnosed rare conditions.
The Scottish NHS is currently considering which genetic and genomic testing services to provide for the diagnosis of rare developmental disorders.
This study found that offering trio ES as a second-line test (after CMA, but replacing gene panels) is a cost-saving option for the Scottish NHS, compared to the diagnostic odyssey of genetic testing. For strategies involving GS, despite the small increase in diagnostic yield, costs increased significantly. This suggests that, at present, WTP per additional diagnosis would need to be £48,000–£83,000 (depending on trio GS costs) to justify the additional cost of GS, compared to ES. Whilst several areas for future research have been identified, our results remain useful in planning a Scottish genetic and genomic testing strategy for undiagnosed developmental disorders over the short- to medium-term.
Unveiling the molecular mechanisms of Danggui-Shaoyao-San against Alzheimer’s disease in APP/PS1 mice via integrating proteomic and metabolomic approaches
112bc0da-977b-40bb-86d9-fbdd12954407
11577849
Biochemistry[mh]
Alzheimer’s disease (AD) is the most common cause of dementia and poses a substantial global health challenge. According to a recent report estimating the global prevalence of dementia, 152.8 million people will suffer from dementia by 2050 , and AD will account for approximately 50–70% of the cases . Although recent advances in understanding AD pathophysiology have highlighted the vital role of amyloid-beta (Aβ), hyperphosphorylated tau, neuroinflammation, and mitochondrial dysfunction , the complex pathogenesis of AD remains unclear and there are no definitive therapies against AD. Consequently, other potential pathogenic factors such as brain energy metabolism (EM) have attracted increasing attention. Emerging evidence indicates deficient EM in the brains of patients with AD . For thousands of years, traditional Chinese medicine (TCM) has played a crucial role in the prevention and treatment of diseases, including AD. The classical formula danggui-shaoyao-san (DSS) ameliorates AD through multiple biological pathways [ – ]. For instance, it has been suggested that DSS alters lipid metabolism through the activation of LXR-PPAR-γ and alleviated mucosal barrier dysfunction assessed with a broad range of techniques, resulting in the attenuation of cognitive impairment through the microbiota-gut-brain axis . Furthermore, evidence from the literature reveals that the combination of DSS and its decomposed recipes can decrease the abundance of harmful gut microbiota, resulting in improvements in cognitive and learning skills . Our previous study applied a systems pharmacology-based approach to investigate the mechanisms of DSS prescription for the treatment of AD . However, the specific mechanism of action (MOA) of DSS in AD has not been fully elucidated. Glycogen synthase kinase 3β (GSK3β) is a potential therapeutic target against AD , and previous studies have confirmed that a GSK3β inhibitor (Li 2 CO 3 ) alters hippocampal energy metabolism . Peroxisome proliferator-activated receptor gamma coactivator l alpha (PGC-1α) is a key regulator of mitochondrial function, and an in vivo and vitro study indicated that the GSK3β/PGC-1α axis may play an important role in neuronal metabolic integrity . Over the past few decades, systems biology-driven omics strategies (e.g. metabolomics and proteomics) combined with systems pharmacology have been widely applied to decipher the therapeutic MOA of TCM against complex diseases . Indeed, a comprehensive exploration of the multi-dimensional relationships and interactions between metabolites and proteins in TCM research could provide a broad scope for the modernisation and development of TCM. Recent metabolomic and proteomic analyses have revealed the neuroprotective mechanisms of defatted walnut powder against scopolamine-induced AD in mice . Moreover, an in vivo study revealed that Huanglian ointment (HLO) promoted Staphylococcus aureus -infected wound healing through multi-omics strategies, providing a theoretical basis for the treatment of wounds after anal fistula surgery with HLO . In this study, proteomic and metabolomic approaches as well as in vivo experiments were used to investigate the MOA of DSS against AD. First, an in vivo experiment was conducted to validate the pharmacodynamic effect of DSS on AD (Fig. A). The herbal ingredients of DSS and AD genes from multiple authoritative databases were integrated to construct a drug-target network (Fig. B). 
We evaluated the absorption, distribution, metabolism, excretion, and toxicity (ADMET) properties to screen the main ingredients of DSS (Fig. B). Subsequently, we used proteomics and targeted energy metabolomics to investigate the MOA of DSS in AD (Fig. C). Finally, further mechanistic verification of the EM was performed (Fig. D). Manual curation of AD disease genes We first integrated 299 AD-related genes based on a previous study . To further extend the disease genes, we searched for the keyword “Alzheimer’s disease” in three databases: GeneCards ( https://www.genecards.org/ ), DisGeNET ( https://www.disgenet.org/ ), and Opentarget ( https://platform.opentargets.org/ ). We reserved only AD genes in the GeneCards database with a score ≥ 60 and in the DisGeNET database with a score > 0.3. The top 100 genes in the Opentarget database were retained. After removing the duplicates, 433 AD genes were identified (Table ). Integration of drug-target network of DSS In 2019, our previous study integrated 19,293 drug-target (D-T) interactions (DTIs) comprising 937 unique compounds and 490 target proteins . To enlarge the DTIs, we acquired the herbal ingredients and related targets from the ITCM database ( http://itcm.biotcm.net/ ) by searching for herb names and integrated 3,492 DTIs from a previous publication . Among the three sources of DTIs mentioned above, only ingredients with clear CID numbers and with a focus on AD genes were preserved. After merging the duplicate DTIs, 6,514 DTIs connecting 1,118 herbal ingredients and 218 AD genes were identified (Table ). ADMET properties evaluation ADMET properties were evaluated using ADMETlab 3.0 ( https://admetlab3.scbdd.com ). ADMETlab 3.0. provides a comprehensive and efficient platform for assessing ADMET-related parameters, physicochemical properties, and medicinal chemistry . In this study, three parameters, including intestinal absorption (HIA), blood-brain barrier (BBB), and plasma protein binding (PPB), were used to screen the core ingredients in the DSS formula. HIA and BBB values < 0.3 were considered good, whereas PPB ≤ 0.9 was considered excellent. In vivo experimental verification Preparation of DSS The DSS was composed of Danggui (9 g), Shaoyao (48 g), Zexie (24 g), Chuanxiong (24 g), Baizhu (12 g), and Fuling (12 g) and was purchased from the Guangdong Provincial Hospital of Chinese Medicine. The low, medium, and high doses of DSS groups were equivalent to 1.6 g/kg, 3.2 g/kg, and 6.4 g/kg of raw material, respectively. Animal and drug administration Male amyloid precursor protein/presenilin-1 (APP/PS1) mice (3 months old, 25 ± 5 g) were supplied by Jiangsu Lingfei Biotechnology Co. Previous studies have shown that amyloid plaques in APP/PS1 mice begin to emerge in the cortex at approximately 4 months of age and in the hippocampus at approximately 6 months, and increase in size and number with age . All in vivo interventions were approved by the Animal Ethics Committee of Guangzhou University of Chinese Medicine (No. 20230405004). All mice were fed ad libitum food and water and housed under a 12 h light/dark (L/D) cycle, a constant room temperature (24 ± 2 °C) and relative air moisture (55% ± 15%). 
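As an illustration of how the three screening thresholds stated above could be applied to ADMETlab 3.0 output, the sketch below filters a small, entirely hypothetical table of predictions; the compound names, column labels and values are placeholders rather than results from this study.

```python
import pandas as pd

# Hypothetical excerpt of ADMET predictions for DSS ingredients;
# names and values are placeholders, not measured results.
predictions = pd.DataFrame({
    "ingredient": ["compound_A", "compound_B", "compound_C"],
    "HIA":        [0.05, 0.45, 0.12],
    "BBB":        [0.10, 0.20, 0.60],
    "PPB":        [0.82, 0.95, 0.70],
})

# Screening criteria stated in the text: HIA < 0.3 and BBB < 0.3 (good),
# PPB <= 0.9 (excellent).
core = predictions[
    (predictions["HIA"] < 0.3)
    & (predictions["BBB"] < 0.3)
    & (predictions["PPB"] <= 0.9)
]
print(core["ingredient"].tolist())   # -> ['compound_A']
```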
APP/PS1 mice were randomly divided into six groups ( n = 12) after management with ear tags: (1) APP/PS1, model group; (2) APP/PS1 + DSS (6.4 g/kg), DH group; (3) APP/PS1 + DSS (3.2 g/kg), DM group; (4) APP/PS1 + DSS (1.6 g/kg), DL group; (5) APP/PS1 + donepezil (2 mg/kg), positive control group; and (6) APP/PS1 + Li 2 CO 3 (300 mg/kg), inhibitor group. DSS was administered to mice for 75 consecutive days by gavage at 0.1 mL/10 g of dosing volume after adaptive feeding for 7 days. Wild-type (WT) littermates received the same volume of ddH 2 O as control mice. Behaviour test During the experiment, we strictly followed the test procedure and performed behavioural test without excessive human intervention. To reduce bias as much as possible, the entire behavioural experiment was conducted by three different experimenters, and all results were handled using rigorous statistical methods. Y-maze The Y-maze consisted of three arms, and each arm was positioned at a 120° angle, measuring 30 cm × 8 cm × 15 cm (L × W × H). Different geometric shapes were attached to each arm of the maze as visual markers. Each mouse was placed on the same arm (i.e., the starting arm) and subjected to continuous autonomous alternation for 5 min to determine the rate of autonomous alternation. A 75% alcohol solution was used to eliminate odours in the Y-maze arm before the next experiment was performed. During the process, the following indicators were recorded: (1) the total number of entries, the number of times the animals entered the arms of the maze, and (2) alternation, the sequential and consecutive entry into all three arms of the Y-maze. The percentage of alternations was calculated as follows: [total number of alternations / (total number of arm entries − 2)]×100. Morris water maze On day 0, the mice were acclimatised, whereas on days 1−5, the time between placing the mice in the pool and finding the platform (escape latency), as well as the mice’s swimming trajectory, were recorded. If the mice could not find the platform within 60 s, they were guided to the platform and left for 30 s to learn and remember the location. Manipulation was continued for 5 days, and the platform was removed on day 6. The locations of mice were recorded, and they were positioned in the water facing the pool wall and allowed to swim freely for 60 s before being rescued. The number of times the mice traversed the platform while swimming and the time spent in the platform quadrant were recorded. Brain section staining Haematoxylin and eosin (H&E) staining Mouse brain tissues were gradient dehydrated, made transparent, embedded in paraffin, sectioned, dewaxed, and stained with H&E. The stained sections were dehydrated, made transparent, sealed, and observed under a microscope. Nissl staining Mouse brain tissue was subjected to gradient dehydration, rinsed three times with distilled water for 5 min each, placed in a 60 °C temperature chamber, and stained with 1% toluidine blue for 40 min. The dye was washed with distilled water and dehydrated using 70%, 80%, 95%, and 100% ethanol. The brain tissues were then made transparent with xylene and the slices were sealed with neutral gum. Finally, pathological changes in the hippocampal tissues were observed under a microscope. 
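The spontaneous alternation score defined above can be computed directly from the recorded sequence of arm entries. The short function below is a generic implementation of the stated formula, counting consecutive entry triplets that cover all three arms; the entry sequence is made up for illustration and the exact counting convention used in the study may differ.

```python
def alternation_percentage(arm_entries):
    """Spontaneous alternation: consecutive triplets covering all three arms,
    divided by (total arm entries - 2), expressed as a percentage."""
    alternations = sum(
        len(set(arm_entries[i:i + 3])) == 3
        for i in range(len(arm_entries) - 2)
    )
    return 100 * alternations / (len(arm_entries) - 2)

# Hypothetical sequence of arm entries (A, B, C label the three arms).
print(round(alternation_percentage(list("ABCABACBC")), 1))   # 5 of 7 triplets alternate
```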
Mitochondrial function analysis The mitochondrial membrane potential (MMP) in purified mitochondria derived from mouse brain tissue was detected with a JC-1 kit (Beyotime, C2006, China), which examined the red/green fluorescence intensity using a fluorescence microplate reader. Fresh mouse brain tissue (20 mg) was harvested to determine the ATP content using a commercial assay kit (Beyotime, S0027, China). Mouse serum (20 µl) obtained from each mouse was harvested to measure NADH content using an NADH assay kit (Meimian, MM445162M1, China) according to the manufacturer’s instructions. Mitochondrial complex I (JL49024), complex II (JL49033), complex III (JL47673), and complex IV (JL18155) kits (Jianglai, China) were used to detect the contents of mitochondrial complexes I–IV. Oxidative stress analysis A mitochondrial isolation kit (Beyotime, C3606, China) was used for mitochondrial purification. Relative mitochondrial reactive oxygen species (ROS) in mouse brain tissues in each group were detected using a mitochondrial ROS assay kit (Jiancheng Institute, E004-1-1, China). Mouse serum (20 µl) from each mouse was harvested to measure the T-SOD content using a commercial assay kit (Jiancheng Institute, A001-1-2, China) according to the manufacturer’s instructions. A thiobarbituric acid (TBA)-method malondialdehyde (MDA) kit (Jiancheng Institute, A003-1-2, China) was used to detect the MDA content in mouse serum (10 µl). Quantitative real-time PCR analysis Total RNA was extracted from mouse brain tissue using a total RNA kit (Accurate Biology, AC21102, China), which was then reverse-transcribed into cDNA using a cDNA synthesis kit (Accurate Biology, AG11706, China). qPCR analysis was performed using the Bio-Rad CFX96 system, and relative mRNA levels were calculated with the 2^−ΔΔCt method after normalisation to the housekeeping control β-actin. The primer sequences used are listed in Table . Western blot (WB) analysis The primary antibodies used in the WB analysis were as follows: GSK3β (10D6) rabbit monoclonal antibody (1:1000, RM4035, Biodragon, China), phospho-GSK3β (Ser9) antibody (1:1000, TA2016S, Abmart, China), PGC1α rabbit pAb antibody (1:2000, A12348, ABclonal, China), brain-derived neurotrophic factor (BDNF) mouse mAb antibody (1:6000, A12348, ABclonal, China), GLUT4 rabbit monoclonal antibody (1:1000, BD-PT5523, Biodragon, China), anti-glucose transporter GLUT1 antibody (1:1000, ab115730, Abcam, the United Kingdom), β-Actin rabbit mAb antibody (1:50000, AC038, ABclonal, China), β-Actin antibody (1:2000, T0022, Affinity, China). Secondary antibodies against goat anti-rabbit IgG-horseradish peroxidase (1:2000, S0001) and goat anti-mouse IgG-horseradish peroxidase (1:2000, S0002) were obtained from Affinity (Shanghai, China). Immunoblots were visualised using an ECL WB detection kit (Biosharp, BL520B, China) and quantified using the High Sensitivity Chemiluminescent Imaging System (Bio-Rad, Hercules, CHEMIDOCXRS, USA). Densitometry analysis was performed using the Image Lab software (Bio-Rad Laboratories). Thioflavin S (TS) staining TS staining allows the visualisation of amyloid (Aβ) plaques. Brain tissue sections were de-paraffinised and washed three times for 5 min each. A circle was drawn around the tissue, and the section was stained with DAPI for 8 min. TS was prepared with 50% alcohol at a concentration of 0.3%, incubated at room temperature for 8 min, washed with 80% alcohol for 10 s, stained twice drop-wise for 10 s, rinsed once with pure water, sealed, and observed under a fluorescence microscope.
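For reference, the arithmetic behind the 2^−ΔΔCt method can be reproduced in a few lines: the target gene is first normalised to the housekeeping gene (ΔCt) and then to the control group (ΔΔCt). The Ct values below are invented solely to illustrate the calculation and do not correspond to any measurement in this study.

```python
def relative_expression(ct_target, ct_reference, ct_target_ctrl, ct_reference_ctrl):
    """2^-ddCt: normalise the target gene to the housekeeping gene (dCt),
    then to the control group (ddCt)."""
    d_ct_sample = ct_target - ct_reference            # e.g. target vs beta-actin, treated
    d_ct_control = ct_target_ctrl - ct_reference_ctrl
    dd_ct = d_ct_sample - d_ct_control
    return 2.0 ** (-dd_ct)

# Hypothetical Ct values: target gene and beta-actin in a treated and a control sample.
print(round(relative_expression(24.0, 18.0, 26.0, 18.5), 2))   # ~2.8-fold vs control
```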
4D-FastDIA-based quantitative proteomics analysis Sample preparation Hippocampal tissue samples ( n = 4) were ground into a cell powder with liquid nitrogen. Four volumes of lysis buffer were added to the cell powder followed by sonication on ice for 3 min using a high-intensity ultrasonic processor (Scientz). After centrifugation at 12,000 × g at 4 °C for 10 min, the supernatant was collected, and the protein concentration was determined using the BCA kit according to the manufacturer’s instructions. Proteomics data analysis Proteomic experiments were conducted by Jingjie PTM BioLab Ltd. (Hangzhou, China). The original data obtained from LC-MS/MS were imported into the Proteome Discoverer software. The standard fold change (FC) > 1.2 or < 0.83 and p < 0.05 were used to screen differentially expressed proteins (DEPs). All DEPs were annotated using Blast2GO for Gene Ontology (GO) function and then analysed using Fisher’s exact test for GO functional enrichment and bioinformatics. KEGG pathway enrichment analysis of significant DEPs was also performed to decipher the metabolic or signalling pathways in which these DEPs might participate. Targeted metabolomic analysis based on EM Sample preparation and extraction Mouse sera ( n = 6) from each group were obtained after the blood samples were centrifuged at 3000 rpm at 4 °C for 10 min and subsequently stored at -80 °C for targeted metabolomic analysis. Blood serum samples were thawed and vortexed for 10 s; 250 µL of 20% acetonitrile/methanol was added to the sample, vortexed for 3 min, and centrifuged at 12,000 r/min for 10 min at 4 °C. After centrifugation, 200 µL of the supernatant was collected using a protein precipitation plate for further LC-MS analysis. Targeted metabolomic analysis was performed by the Wuhan Metware Biotechnology Co. Ltd. Data processing analysis Data analyses, including unsupervised principal component analysis (PCA) and orthogonal projections to latent structure discriminant analysis (OPLS-DA), were performed. PCA was applied to determine whether abnormal data were present, whereas PLS-DA was performed to investigate differences between the groups. Two hundred permutation tests were performed in which the intercepts of R² and Q² were used to confirm whether there was overfitting of the data to estimate the reliability and stability of the established model. Metabolites with fold change (FC) ≥ 1.5 or FC < 0.67 or VIP score > 1 and p values ( p ≤ 0.05) were considered significantly different. Network visualisation and statistical analysis The D-T and metabolite-gene networks were constructed and analysed using Gephi (v0.9.2, https://gephi.org/ ) and Cytoscape (v3.2.1, http://www.cytoscape.org/ ). All experimental values, expressed as mean ± standard error of the mean (SEM), were analysed using SPSS software, version 26.0. If the data were normally distributed, one-way analysis of variance (ANOVA) was performed, and the LSD method was applied for multiple comparisons between group means when homogeneity of variance was satisfied; Tamhane’s T2 method was used when variances were not homogeneous. If the data were not normally distributed, the non-parametric independent samples Kruskal–Wallis test was applied. Differences were considered statistically significant at P < 0.05.
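The proteomic screening rule quoted above (fold change > 1.2 or < 0.83 with p < 0.05) translates directly into a simple filter, sketched below on a hypothetical table standing in for the Proteome Discoverer export; the protein identifiers and values are placeholders.

```python
import pandas as pd

# Hypothetical excerpt of quantified proteins: fold change (treated/model) and p value.
proteins = pd.DataFrame({
    "protein": ["P1", "P2", "P3", "P4"],
    "fold_change": [1.35, 0.95, 0.70, 1.10],
    "p_value": [0.01, 0.20, 0.03, 0.04],
})

# Differentially expressed proteins: FC > 1.2 (up) or FC < 0.83 (down), p < 0.05.
deps = proteins[
    ((proteins["fold_change"] > 1.2) | (proteins["fold_change"] < 0.83))
    & (proteins["p_value"] < 0.05)
]
print(deps["protein"].tolist())   # -> ['P1', 'P3']
```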
We first integrated 299 AD-related genes based on a previous study . To further extend the disease gene set, we searched for the keyword "Alzheimer's disease" in three databases: GeneCards ( https://www.genecards.org/ ), DisGeNET ( https://www.disgenet.org/ ), and Opentarget ( https://platform.opentargets.org/ ). We retained only AD genes with a score ≥ 60 in the GeneCards database and a score > 0.3 in the DisGeNET database, together with the top 100 genes in the Opentarget database. After removing duplicates, 433 AD genes were identified (Table ). In 2019, our previous study integrated 19,293 drug-target (D-T) interactions (DTIs) comprising 937 unique compounds and 490 target proteins . To enlarge the DTI set, we acquired the herbal ingredients and related targets from the ITCM database ( http://itcm.biotcm.net/ ) by searching for herb names and integrated 3,492 DTIs from a previous publication . Among the three sources of DTIs mentioned above, only ingredients with clear CID numbers and with targets among the AD genes were preserved. After merging duplicate DTIs, 6,514 DTIs connecting 1,118 herbal ingredients and 218 AD genes were identified (Table ).

ADMET properties were evaluated using ADMETlab 3.0 ( https://admetlab3.scbdd.com ), which provides a comprehensive and efficient platform for assessing ADMET-related parameters, physicochemical properties, and medicinal chemistry . In this study, three parameters, intestinal absorption (HIA), blood-brain barrier (BBB) permeability, and plasma protein binding (PPB), were used to screen the core ingredients in the DSS formula. HIA and BBB values < 0.3 were considered good, whereas PPB ≤ 0.9 was considered excellent.

Preparation of DSS

The DSS was composed of Danggui (9 g), Shaoyao (48 g), Zexie (24 g), Chuanxiong (24 g), Baizhu (12 g), and Fuling (12 g) and was purchased from the Guangdong Provincial Hospital of Chinese Medicine. The low, medium, and high doses of DSS were equivalent to 1.6 g/kg, 3.2 g/kg, and 6.4 g/kg of raw material, respectively.

Animal and drug administration

Male amyloid precursor protein/presenilin-1 (APP/PS1) mice (3 months old, 25 ± 5 g) were supplied by Jiangsu Lingfei Biotechnology Co. Previous studies have shown that amyloid plaques in APP/PS1 mice begin to emerge in the cortex at approximately 4 months of age and in the hippocampus at approximately 6 months, and increase in size and number with age . All in vivo interventions were approved by the Animal Ethics Committee of Guangzhou University of Chinese Medicine (No. 20230405004). All mice were fed food and water ad libitum and were housed under a 12 h light/dark cycle at a constant room temperature (24 ± 2 °C) and relative humidity (55% ± 15%). APP/PS1 mice were randomly divided into six groups ( n = 12) after ear tagging: (1) APP/PS1, model group; (2) APP/PS1 + DSS (6.4 g/kg), DH group; (3) APP/PS1 + DSS (3.2 g/kg), DM group; (4) APP/PS1 + DSS (1.6 g/kg), DL group; (5) APP/PS1 + donepezil (2 mg/kg), positive control group; and (6) APP/PS1 + Li2CO3 (300 mg/kg), inhibitor group. DSS was administered by gavage for 75 consecutive days at a dosing volume of 0.1 mL/10 g after 7 days of adaptive feeding. Wild-type (WT) littermates received the same volume of ddH2O and served as the control group.
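The following is a minimal sketch of the kind of threshold-based merging described above for assembling the AD gene set; the database exports, column names, scores, and seed genes are hypothetical placeholders rather than the actual files used in the study.

```python
import pandas as pd

# Hypothetical exports from each database (column names and scores are assumptions)
genecards  = pd.DataFrame({"gene": ["APP", "MAPT", "PSEN1"], "score": [92.1, 70.4, 55.0]})
disgenet   = pd.DataFrame({"gene": ["APP", "APOE", "BDNF"],  "score": [0.9, 0.6, 0.25]})
opentarget = pd.DataFrame({"gene": ["APP", "PSEN2", "TREM2"], "overall_score": [0.95, 0.7, 0.65]})
seed_genes = {"APP", "PSEN1", "GSK3B"}          # e.g. genes carried over from the earlier study

selected = (
    set(genecards.loc[genecards["score"] >= 60, "gene"])      # GeneCards score >= 60
    | set(disgenet.loc[disgenet["score"] > 0.3, "gene"])       # DisGeNET score > 0.3
    | set(opentarget.nlargest(100, "overall_score")["gene"])   # top 100 OpenTargets genes
    | seed_genes
)
print(sorted(selected))  # de-duplicated AD gene list
```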
Behaviour test

During the experiment, we strictly followed the test procedures and performed the behavioural tests without excessive human intervention. To reduce bias as much as possible, the entire behavioural experiment was conducted by three different experimenters, and all results were handled using rigorous statistical methods.

Y-maze

The Y-maze consisted of three arms, each positioned at a 120° angle and measuring 30 cm × 8 cm × 15 cm (L × W × H). Different geometric shapes were attached to each arm of the maze as visual markers. Each mouse was placed in the same arm (the starting arm) and allowed to alternate freely between the arms for 5 min to determine the rate of spontaneous alternation. A 75% alcohol solution was used to eliminate odours from the Y-maze arms before the next trial. During the test, the following indicators were recorded: (1) the total number of entries, i.e., the number of times the animal entered the arms of the maze, and (2) alternations, i.e., sequential, consecutive entries into all three arms of the Y-maze. The percentage of alternation was calculated as [total number of alternations / (total number of arm entries − 2)] × 100 (see the sketch at the end of this section).

Morris water maze

On day 0, the mice were acclimatised; on days 1−5, the time between placing each mouse in the pool and its finding the platform (escape latency), as well as its swimming trajectory, were recorded. If a mouse could not find the platform within 60 s, it was guided to the platform and left there for 30 s to learn and remember the location. Training was continued for 5 days, and the platform was removed on day 6. In the probe trial, mice were placed in the water facing the pool wall and allowed to swim freely for 60 s before being rescued, and their positions were recorded. The number of times each mouse crossed the former platform location and the time spent in the platform quadrant were recorded.

Brain section staining

Haematoxylin and eosin (H&E) staining

Mouse brain tissues were gradient dehydrated, cleared, embedded in paraffin, sectioned, dewaxed, and stained with H&E. The stained sections were dehydrated, cleared, sealed, and observed under a microscope.

Nissl staining

Mouse brain tissue was subjected to gradient dehydration, rinsed three times with distilled water for 5 min each, placed in a 60 °C chamber, and stained with 1% toluidine blue for 40 min. The dye was washed off with distilled water, and the sections were dehydrated in 70%, 80%, 95%, and 100% ethanol. The brain tissues were then cleared with xylene, and the slices were sealed with neutral gum. Finally, pathological changes in the hippocampal tissues were observed under a microscope.
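As a small illustration of the Y-maze scoring described above, the Python sketch below computes the spontaneous-alternation percentage from an ordered list of arm entries; the entry sequence is hypothetical.

```python
def alternation_percentage(entries):
    """Spontaneous alternation (%) from an ordered list of arm entries.

    An alternation is any window of three consecutive entries that visits
    all three arms (e.g. A-B-C or B-C-A); the percentage is
    alternations / (total arm entries - 2) * 100.
    """
    if len(entries) < 3:
        return 0.0
    alternations = sum(
        len(set(entries[i:i + 3])) == 3 for i in range(len(entries) - 2)
    )
    return alternations / (len(entries) - 2) * 100

# Hypothetical 5-min recording for one mouse
print(alternation_percentage(list("ABCACBABCA")))  # 75.0
```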
DSS rescued cognitive impairment in APP/PS1 mice

The Y-maze test demonstrated that the alternation rate of APP/PS1 mice (model group) was lower than that of WT mice (control group, p < 0.01) and was significantly improved after DSS administration ( p < 0.01, Fig. A-C). The three doses of DSS produced effects equivalent to, or even better than, those of the donepezil (positive control) group ( p < 0.01, Fig. B). The Morris water maze test was performed to examine the cognitive capabilities of the mice. During the training days, the model group showed a longer escape latency than the control group, and this trend was markedly reversed by treatment with low or high doses of DSS ( p < 0.01; Fig. D, E). In the probe trial, mice in the model group had the lowest number of platform crossings. In contrast, the DSS groups showed more crossings and longer stay times in the target quadrant than the APP/PS1 mice of the model group ( p < 0.01, Fig. F, G). Furthermore, mice in the DSS groups had shorter escape latencies than those in the model group ( p < 0.01, Fig. H). Taken together, these behavioural tests suggest that DSS restores the cognitive decline of APP/PS1 mice.

Effect of DSS on neuronal survival and Aβ plaques

Multiple studies have confirmed the vital role of neuronal apoptosis in AD pathology. Therefore, we investigated whether DSS could attenuate neuronal apoptosis in the hippocampus of AD mice. Representative hippocampal microphotographs of Nissl and H&E staining revealed that neuronal cells in the CA3 and DG areas of the hippocampus in the control group were uniformly distributed, with clear edges, and were neatly arranged (Fig. A, B). In contrast, the neuronal cells in the model group were damaged and loosely arranged, with abnormal cell morphology and wrinkled cytoplasm. After treatment with DSS, Li2CO3, or donepezil, the number of damaged neuronal cells in the hippocampus of AD mice was substantially reduced, and a remarkable improvement in the number, morphology, and structure of the neuronal cells was observed (Fig. A, B). Since Aβ plaques are an important pathological hallmark of AD, we next performed TS staining to examine whether DSS treatment affected Aβ deposition in AD mice. TS staining showed that TS-positive Aβ plaques in brain regions including the cortex and hippocampus decreased significantly with the administration of DSS ( p < 0.01) and Li2CO3 ( p < 0.01) compared with the model group (Fig. C, D), suggesting that DSS could reduce the Aβ burden in the brains of APP/PS1 mice.

D-T network analysis and screening process of the core ingredients in DSS

We first developed a D-T network of the herbal ingredients in DSS, which consisted of 6,514 DTIs connecting 1,118 ingredients and 218 AD genes (Fig. A). Multiple herbal ingredients were connected with numerous AD genes in this network, with an average ingredient degree of 5.1 for each gene.
The top 10 ingredients with the highest degree numbers were as follows: augustic acid (CID15560128, D = 141), quercetin (CID5280343, D = 69), capsaicin (CID1548943, D = 63), apigenin (CID5280443, D = 44), luteolin (CID5280445, D = 44), gallic acid (CID370, D = 39), oleanolic acid (CID10494, D = 30), caffeic acid (CID689043, D = 29), kaempferol (CID5280863, D = 29), and niacinamide (CID936, D = 27). Emerging evidence indicates that these ingredients may have potent therapeutic effects against AD. Network analysis indicated that capsaicin interacts with 63 AD-related genes, suggesting its potential against AD. Indeed, an in vivo study showed that a diet high in capsaicin reduced the incidence and development of AD by altering the gut microbiome and serum metabolome and that capsaicin could reduce neurodegeneration, neuroinflammation, and deterioration in spatial memory in the AD model . Apigenin has also been reported to reduce AD symptoms in transgenic Drosophila models of AD . Moreover, a recent study revealed that luteolin effectively improved cognitive deficits in 3 × Tg-AD mice and inhibited Aβ-induced oxidative stress, mitochondrial dysfunction, and neuronal apoptosis via a PPARγ-dependent mechanism, which may serve as a therapeutic agent for AD . In summary, ingredients with a high degree number in the D-T network are likely to have good therapeutic potential against AD. Network analysis indicated that 28 of the 218 AD genes had degree numbers greater than or equal to 30 (Fig. A). Among them, MAPT (K = 906) had the highest number of ingredient connections, followed by PTPN1 (K = 728), and AChE (K = 550). A growing body of literature has confirmed their vital role in AD. For instance, the miR-124/PTPN1 pathway has been identified as a critical mediator of synaptic dysfunction and memory loss and could be regarded as a promising novel therapeutic target for patients with AD . Moreover, in vivo research revealed that correcting abnormalities in miR-124/PTPN1 signalling rescued tau pathology , also highlighting its importance in AD. Furthermore, multiple studies have confirmed that AChE inhibitors can reduce Aβ levels in AD both in silico and in vivo . To further identify the core ingredients in DSS, we integrated the ingredients of each herb in DSS and evaluated the ADMET properties (Fig. B). The Sankey diagram shows that Danggui had the highest number of herbal ingredients ( n = 616), followed by Chuanxiong ( n = 461), and Baizhu ( n = 199). Herbal ingredients can be classified into eight categories: benzenoids, organoheterocyclic compounds, organic acids and derivatives, phenylpropanoids and polyketides, organic oxygen compounds, hydrocarbons, lipids, and lipid-like molecules. The top three categories were lipids and lipid-like molecules (452/1118 = 40.4%), phenylpropanoids and polyketides (132/1118 = 11.8%), and benzenoids (127/1118 = 11.4%), indicating that these herbal ingredients may have great potential against AD. Moreover, the ADMET results suggested that 920 herbal ingredients had good HIA and 590 of them were predicted to cross the BBB. After PPB screening, 253 herbal ingredients were identified that could be the core ingredients of DSS (Table ). Interestingly, we found that multiple core herbal ingredients were also consistent with ingredients with high degrees in the D-T network (Fig. A). 
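A minimal pandas sketch of the ADMET-based screening funnel described above (HIA and BBB values < 0.3 treated as good, PPB ≤ 0.9 treated as excellent) is shown below; the ingredient names, predicted values, and column names are hypothetical placeholders rather than actual ADMETlab 3.0 output.

```python
import pandas as pd

# Hypothetical ADMET predictions for a few DSS ingredients
admet = pd.DataFrame({
    "ingredient": ["quercetin", "luteolin", "capsaicin", "gallic acid"],
    "HIA": [0.12, 0.25, 0.05, 0.40],
    "BBB": [0.28, 0.35, 0.10, 0.20],
    "PPB": [0.85, 0.80, 0.95, 0.60],
})

step1 = admet[admet["HIA"] < 0.3]   # good intestinal absorption
step2 = step1[step1["BBB"] < 0.3]   # predicted to cross the blood-brain barrier
core  = step2[step2["PPB"] <= 0.9]  # excellent plasma protein binding
print(core["ingredient"].tolist())  # candidate core ingredients after the funnel
```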
4D-FastDIA quantitative proteomics analysis

On the basis of these in vivo pharmacodynamic results, we selected the high dose of DSS (DH) for the subsequent proteomic and metabolomic experiments to investigate the MOA of DSS against AD. To determine the quantitative repeatability of the data, we calculated the relative standard deviation (RSD) values of the control, model, and DH groups based on the relative quantitative values of the repeated samples in each group. The RSD values of the three groups were < 0.2, indicating that the data were stable and reliable, with good quantitative repeatability (Fig. A). Next, we integrated the protein intensity values of each sample and found that their distributions were comparatively centralised and that the sample means were at the same level, suggesting good sample quality (Fig. B). Furthermore, we characterised the protein alterations across the control, model, and DH treatment groups by performing 4D-FastDIA quantitative proteomic analysis of hippocampal tissue derived from APP/PS1 mice. In total, 111 DEPs were identified between the control and model groups, of which 90 were significantly upregulated and 22 were downregulated (Fig. C). Meanwhile, 69 DEPs, including 43 upregulated and 26 downregulated proteins, were identified between the model and DH groups, indicating that hippocampal tissue samples in the DH group showed significant changes in protein levels compared with those in the model group. We also performed an overlap analysis of the DEPs from the control vs. model and model vs. DH comparisons and found that 10 DEPs were shared across the three groups: Wdfy1, Slc6a20b, Mt-Cyb, Omp, Mef2c, Sphk1, Rpl14, Pcbp3, Clic6, and Scgn (Fig. D). Furthermore, we developed the core ingredient-AD gene network and the protein-protein interaction (PPI) network of the DEPs to investigate their relationships. Network analysis indicated that the core ingredients interacted with 118 AD genes and that 36 DEPs were connected by 125 PPIs. Remarkably, we found that two genes ( ACHE and CD44 ) were regulated by both the core ingredients of DSS and the DEPs, suggesting that the core ingredients of DSS could act on these DEPs to exert an anti-AD effect (Fig. E). To explore the potential MOAs in which the DEPs of the three different groups may be involved, we also performed GO enrichment analysis of the DEPs, covering biological process (BP) and cellular component (CC) terms, together with KEGG pathway analysis. These DEPs participate in multiple BPs, including mitochondrial ATP synthesis-coupled electron transport, ATP synthesis-coupled electron transport, and oxidative phosphorylation (Fig. A). Emerging literature has shown that these BPs are highly related to EM . Furthermore, the CC annotations suggested that these DEPs were also associated with EM, such as the inner mitochondrial membrane protein complex and the mitochondrial respirasome (Fig. A). KEGG pathway analysis indicated that these DEPs could participate in EM (Fig. B). Taken together, the preliminary proteomic analysis demonstrated that DSS might relieve AD by modulating the EM.

Targeted energy metabolic profiling

As the proteomic analysis indicated that DSS might act on EM to exert a therapeutic effect on AD, we next performed targeted energy metabolic profiling to determine the energy metabolites regulated by DSS.
The PCA scoring plot suggested that the model group was clearly separated from the control group (Fig. A), indicating a significant change in the type or level of the metabolites. Moreover, the model group was separated from the DH group (Fig. C) after treatment with DSS, implying that the metabolic levels in the model group could be regulated by DSS. The reliability of the OPLS-DA classification model was confirmed using 200 permutation tests; the results for the control and model groups were R²X = 0.764, R²Y = 0.983, and Q² = 0.728 (Fig. B), and those between the model and DSS groups were R²X = 0.484, R²Y = 0.889, and Q² = 0.687 (Fig. D), indicating that the OPLS-DA model was not overfitted and had good predictive power. Targeted energy metabolic profiling of the three groups identified and quantified 47 energy metabolites in 18 samples (Fig. E). Next, we integrated several important metabolites in the EM and determined the differences in their levels among the three groups (Fig. F). Moreover, eight significant metabolite biomarkers, including 3-phenyllactic acid, L-alanine, L-cysteine, serine, uracil, argininosuccinic acid, citric acid, and sedoheptulose-7-phosphate, were associated with DH treatment after applying the screening criteria (see Materials and Methods 2.6.2, Fig. G, H). We further analysed the relative peak area changes of the eight significant metabolite biomarkers and found that three of them (serine, L-alanine, and 3-phenyllactic acid) were downregulated ( p < 0.05) in the model group compared with the control group, whereas their levels were upregulated after treatment with DSS ( p < 0.05, DH group vs. the model group, Fig. I).

Metabolite-AD gene network of DSS

Because changes in gene expression might directly influence metabolite production and consumption, which mutually affect disease progression, we next explored the relationship between the differential serum metabolites and AD genes. AlzGPS is a genome-wide positioning system platform that catalyses multi-omics for Alzheimer's drug discovery . First, we integrated the metabolite-associated genes from the AlzGPS database and performed an overlap analysis between the metabolite-associated genes and the AD disease genes to highlight metabolite-associated AD genes. As shown in Fig. , the differential metabolite-AD gene network consisted of 30 differential metabolites and 90 AD genes. Among them, L-aspartate (degree = 15) had the highest number of gene connections, followed by succinic acid (degree = 14) and arginine (degree = 12). SLC16A10 interacted with seven metabolites, whereas ASS1 interacted with five metabolites. We found that 14 AD genes, including HPRT1 , ABCA1 , TPI1 , MDH1 , OXCT1 , NOS1 , NOS3 , GAD1 , GAD2 , ENO1 , GPI , EGFR , CBS , and GOT1 , had close metabolite connections and were simultaneously regulated by DSS, suggesting that DSS might act on these AD genes to regulate the related differential metabolites.
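The degree statistics quoted for the metabolite-AD gene network above can be reproduced on any edge list with a few lines of networkx; the edges below are a small hypothetical subset used purely for illustration.

```python
import networkx as nx

# A few illustrative metabolite-gene edges (hypothetical subset of the full network)
edges = [
    ("L-aspartate", "GOT1"), ("L-aspartate", "ASS1"), ("L-aspartate", "NOS1"),
    ("succinic acid", "OXCT1"), ("succinic acid", "MDH1"),
    ("arginine", "ASS1"), ("arginine", "NOS1"), ("arginine", "NOS3"),
]

g = nx.Graph()
g.add_edges_from(edges)

# Degree = number of connections; high-degree nodes are the network "hubs"
for node, degree in sorted(g.degree, key=lambda kv: kv[1], reverse=True):
    print(f"{node}: degree {degree}")
```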
DSS promoted brain glucose uptake

Brain glucose uptake has been demonstrated to play a vital role in cellular energy supply, and reduced GLUT levels in patients with AD could impair glucose availability, which may accelerate neuronal death and ultimately lead to brain dysfunction and memory loss . Here, we assessed the brain glucose uptake capacity by measuring the gene and protein expression of GLUT1 and GLUT4 using qPCR and WB analysis. qPCR results showed that the mRNA levels of GLUT1 and GLUT4 in the cortex were significantly decreased in the model group compared with the control group, whereas their expression was remarkably upregulated after DSS ( p < 0.01), Li2CO3 ( p < 0.05), and donepezil ( p < 0.01) administration in APP/PS1 mice (Fig. A, B). WB results showed that the protein expression of GLUT1 and GLUT4 was upregulated in the cortex of APP/PS1 mice after the administration of DSS, Li2CO3, and donepezil (Fig. D, E). In the hippocampus, a significant decrease in the protein expression of GLUT1 and GLUT4 was observed in the model group ( p < 0.05, p < 0.01) compared with the control group, while their levels were also upregulated after treatment with DSS ( p < 0.05, p < 0.01), Li2CO3 ( p < 0.01), and donepezil ( p < 0.05, Fig. F-G). Brain-derived neurotrophic factor ( BDNF ) is reported to be associated with Aβ accumulation, tau phosphorylation, neuroinflammation, and neuronal apoptosis, and may play a role in the pathogenesis of AD . Therefore, we used qPCR to assess the mRNA expression of BDNF in the cortex, and WB analysis to evaluate the protein expression of BDNF in the cortex and hippocampus. The qPCR results showed that the mRNA expression of BDNF was significantly decreased in the model group ( p < 0.01), whereas DSS ( p < 0.05), Li2CO3 ( p < 0.01), and donepezil ( p < 0.05) treatments remarkably upregulated BDNF gene expression in AD mice (Fig. C). Meanwhile, WB results suggested that there was no statistically significant difference in BDNF protein expression in the cortex (Fig. D, E). However, BDNF expression in the hippocampus of mice that received DSS ( p < 0.05), Li2CO3 ( p < 0.05), or donepezil ( p < 0.05) was remarkably increased compared with that in the model group (Fig. F, G).

DSS improved mitochondrial function and relieved oxidative stress

As mitochondria play a crucial role in cellular EM, we first examined whether DSS could improve mitochondrial function. MMP, an important indicator of cellular EM, was measured, and a significant increase in MMP was observed after administration of DSS ( p < 0.05) and Li2CO3 ( p < 0.05) compared with the model group (Fig. A). Since the functional and structural integrity of mitochondria is also reflected in ATP and NADH levels , we next measured their content using commercial kits. ATP and NADH levels were markedly upregulated in APP/PS1 mice treated with DSS ( p < 0.01) and Li2CO3 ( p < 0.01) (Fig. B, C). Given that the mitochondrial respiratory chain is an important component of cellular EM, we investigated the effect of DSS on the levels of complexes I-IV. Consistent with the protective effect of DSS on mitochondrial membrane integrity, DSS significantly increased the levels of complexes I ( p < 0.01), II ( p < 0.01), III ( p < 0.01), and IV ( p < 0.05, p < 0.01) (Fig. D-G). In summary, DSS can promote EM and improve mitochondrial function in APP/PS1 mice. Mitochondria are the main sources and generators of intracellular ROS, and ROS overproduction results in oxidative stress and mitochondrial dysfunction . To explore the effect of DSS on oxidative stress, ROS levels in mouse brain tissue and T-SOD and MDA levels in mouse serum were measured. The model group showed excessive ROS production compared with the control group ( p < 0.01), and DSS treatment ( p < 0.01) eliminated the ROS overload in the brains of AD mice (Fig. H).
T-SOD levels decreased significantly in the model group and were markedly upregulated by DSS treatment (Fig. I). Meanwhile, the elevated MDA levels in the model group were significantly reduced after DSS treatment ( p < 0.01) (Fig. J).

DSS regulated the GSK3β/PGC1α signalling pathway

Since GSK3β has been shown to be a regulator of EM in the brain , and also played an important role in the D-T network (Fig. A), we next determined whether DSS could alleviate the cognitive deficit through the GSK3β/PGC1α signalling pathway. The qPCR results showed that the mRNA expression level of GSK3β increased in the model group ( p < 0.05) and was significantly downregulated after DSS treatment ( p < 0.01) (Fig. A). Regarding PGC1α, the reduced mRNA expression level of PGC1α in the model group ( p < 0.01) was markedly reversed by treatment with DSS ( p < 0.05, p < 0.01) and Li2CO3 ( p < 0.01) (Fig. B). The WB results suggested that, in the hippocampus and cortex, the protein expression levels of p-GSK3β and PGC1α were decreased in the model group, whereas increased expression was detected in APP/PS1 mice after administration of DSS ( p < 0.05, p < 0.01) and Li2CO3 ( p < 0.01) (Fig. C-F).
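For context, the sketch below shows how the group comparisons reported throughout these results (normality check, then one-way ANOVA or a Kruskal–Wallis test, as described in the statistical-analysis subsection) could be run in Python; the measurements are hypothetical, and the LSD and Tamhane's T2 post hoc procedures used in SPSS are not reproduced here.

```python
from scipy import stats

# Hypothetical per-group measurements (e.g. relative MMP)
groups = {
    "control": [1.00, 0.95, 1.08, 1.02, 0.97, 1.05],
    "model":   [0.61, 0.70, 0.66, 0.58, 0.73, 0.64],
    "DH":      [0.88, 0.92, 0.81, 0.95, 0.85, 0.90],
}

# Normality check per group (Shapiro-Wilk)
normal = all(stats.shapiro(values).pvalue > 0.05 for values in groups.values())

if normal:
    # One-way ANOVA across groups; post hoc tests would follow in SPSS
    stat, p = stats.f_oneway(*groups.values())
    test = "one-way ANOVA"
else:
    # Non-parametric alternative when normality is not met
    stat, p = stats.kruskal(*groups.values())
    test = "Kruskal-Wallis"

print(f"{test}: statistic = {stat:.2f}, p = {p:.4g}")
```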
Overproduction of ROS results in oxidative stress and mitochondrial dysfunction . To explore the effect of DSS on oxidative stress, ROS levels in mouse brain tissue and T-SOD and MDA levels in mouse serum were measured. The model group showed excessive ROS production compared to the control group ( p < 0.01), and DSS treatment ( p < 0.01) eliminated ROS overload in the brains of AD mice (Fig. H). T-SOD levels decreased significantly in the model group and were markedly upregulated by DSS treatment (Fig. I). Meanwhile, elevated MDA levels in the model group were significantly reduced after DSS treatment ( p < 0.01) (Fig. J). Since GSK3β was shown to be a regulator of EM in the brain , and also played an important role in the D-T network (Fig. A), we next determined whether DSS could alleviate cognitive deficit through the GSK3β/PGC1α signalling pathway. The qPCR results showed that the mRNA expression level of GSK3β in the model group increased ( p < 0.05), while its level was significantly downregulated after DSS treatment ( p < 0.01) (Fig. A). Regarding the PGC1α, the reduced mRNA expression level of PGC1α in the model group ( p < 0.01) was markedly reversed by treatment with DSS ( p < 0.05, p < 0.01) and Li 2 CO 3 ( p < 0.01) (Fig. B). The WB results suggested that, in the hippocampus and cortex, the protein expression levels of p-GSK3β and PGC1α were decreased in the model group, while the increased expression levels could be detected in APP/PS1 mice after administration of DSS ( p < 0.05, p < 0.01) and Li 2 CO 3 ( p < 0.01) (Fig. C-F). AD is the most common neurodegenerative disease and leads to dementia and cognitive impairment. Multiple biochemical mechanisms, including amyloid-β plaques, neurofibrillary tangles, oxidative stress, neuroinflammation, mitochondrial dysfunction, etc., are crucially involved in the physiological and pathophysiological processes of AD. Recently, there has been an increasing interest in other potential pathogenic factors, such as EM . Aberrant EM is a critical factor in neurodegeneration, including AD . For example, an in vitro study in 2024 indicated that the rescue of mitochondrial EM could improve synaptic loss in human induced pluripotent stem cell (hiPSC)-based models of AD . To date, available anti-AD drugs (e.g., NMDA antagonists) may only relieve symptoms, and there is still no effective therapy to reverse disease progression for patients with AD. Therefore, there is an urgent need to identify potential therapeutic agents for the treatment of AD. TCM (e.g., DSS and the Bushenyizhi formula) has played an important role in the fight against complex diseases (e.g., AD) in China for thousands of years. In our previous study, we proposed an integrative systems pharmacology approach to determine the therapeutic MOAs of DSS against AD, which also provided a novel in silico paradigm to investigate effective substances in complex TCM prescriptions . Furthermore, DSS was shown to play a positive and effective role in increasing the content of docosahexaenoic acid by upregulating iPLA2 and 15-LOX, which ameliorated oxidative stress and inflammation and improved cognitive deficits in APP/PS1 mice . Although the literature has investigated DSS could alleviate AD through multiple pathways, the underlying mechanisms of DSS against AD have not been fully elucidated, and whether DSS could act on EM to exert therapeutic effects remains unknown. 
In this study, multi-omics approaches, including proteomics and metabolomics, were used to decipher the potential MOAs of DSS against AD. Specifically, we first performed an in vivo experiment for the pharmacodynamic verification of DSS against AD, and the results suggested that DSS alleviated cognitive impairment, had a beneficial effect on neuronal apoptosis, and reduced Aβ plaques. Next, we constructed the D-T network of herbal ingredients in the DSS formula that focused on AD genes and found that 253 of them could be the core ingredients. Furthermore, we applied a proteomic strategy to preliminarily explore the potential mechanism of DSS against AD, which showed that DSS might exert a therapeutic effect on AD by regulating EM. To obtain a holistic view of the changes in the EM profile between the different groups (C vs. M vs. DH), we integrated the metabolomic data and performed a comprehensive analysis of the metabolite-AD gene network of DSS. A total of 47 altered metabolites were identified during the EM process, and network analysis indicated that DSS could act on 14 AD genes to mediate EM-related metabolites. Mechanistically, DSS significantly improved brain glucose uptake by upregulating GLUT1 and GLUT4 protein expression and acted on the GSK3β/PGC1α signalling pathway in APP/PS1 mice. A previous study has confirmed that GSK3β could regulate brain EM and that GSK3β inhibition could alter hippocampal EM . We speculated that DSS could regulate the GSK3β/PGC1α signalling pathway to promote EM by upregulating the levels of MMP, complexes I-IV, ATP, and NADH, thereby exerting an anti-AD effect. The schematic illustration of this study is summarised in Fig. . Compared to previous studies, this study has several advantages. First, the latest components and disease genes from multiple authoritative databases were integrated, thereby laying a solid foundation for follow-up analysis. Second, multi-omics strategies, including metabolomics and proteomics, rather than single-omics analysis, were applied to investigate the molecular MOA, which facilitated a comprehensive explanation of the effects of DSS against AD. However, this study had several limitations. First, the experimental verification is not yet sufficiently deep, and the number of samples used for the omics analyses was small because of time and cost constraints. Because mitochondria are the key organelles responsible for energy production and metabolism , the evaluation of mitochondrial function is necessary; additional EM-related mitochondrial function tests should therefore be performed with larger sample sizes in the future. Furthermore, TCM can act on multiple pathways and molecular mechanisms to alleviate AD , whereas this study only explored DSS-formula-regulated EM; other MOAs, such as ferroptosis, require further validation. Finally, the precise components that relieve AD by modulating EM are unknown, and future in silico and in vivo experiments should be conducted to identify promising monomer components. In summary, this study integrated multi-omics approaches to decipher the MOA of DSS against AD. Importantly, our study is the first to propose that DSS could treat AD by regulating the GSK3β/PGC1α signalling pathway to promote EM, providing new perspectives and laying a good foundation for an in-depth understanding of DSS against AD.
A 10-Year Study of Neonatal Sepsis from Tuen Mun Hospital, Hong Kong
Neonatal sepsis, invasive bacterial and fungal infections affecting newborn infants within the first 28 days of life, is a significant cause of neonatal morbidity and mortality. Neonatal sepsis is universal, but its epidemiology is often shaped by socioeconomic and biological factors. The incidence is higher in low- and middle-income countries . Compared to term neonates, preterm neonates are especially susceptible to sepsis and its associated mortality . Over the last two decades, antenatal screening and intrapartum antibiotic prophylaxis against group B streptococcal infection have been practised in many parts of the world and are now recommended by the World Health Organization . Nevertheless, neonatal sepsis remains a significant disease burden, and it is still increasing in some parts of the world . The diagnosis and treatment of neonatal sepsis is challenging. Clinical manifestations of neonatal sepsis are often non-specific , so positive cultures of specimens from normally sterile sites are recognised as the gold standard for the diagnosis . However, the sensitivity of such cultures is limited by the often low-volume specimens sampled from neonates , and antimicrobial consumption is high in both culture-negative and culture-positive cases . A combination of an aminoglycoside and a penicillin antibiotic has become a widespread choice of empiric therapy for neonatal sepsis for nearly half a century and continues to be the recommended regimen from prominent bodies [ , , ] despite the evolving aetiology and antimicrobial resistances, especially in lower-income regions . Neonatal sepsis is conventionally classified into early-onset sepsis and late-onset sepsis, of which the latter has a higher case load . Infections that occur in the first 72 h of life can often be traced to perinatal transmission. Those that occur beyond the first 3 days of life have a wide variety of aetiologies, including nosocomial transmission. The microbiological profiles, including pathogens and antibiotic susceptibility, are thus different between the two kinds of neonatal sepsis. Continuous surveillance, both at the institutional and national levels, is important to track the evolving epidemiology and thus to guide effective clinical management. With the recent implementation of universal antenatal screening for maternal group B Streptococcus colonisation and intrapartum antibiotic prophylaxis, there is a growing interest to study early-onset neonatal sepsis, especially neonatal group B streptococcal infection in Hong Kong. As there has not been any systematic study about neonatal sepsis in Hong Kong, we set out to understand the nature of this condition, including the microbiological profile and susceptibility of the microbial isolates, the kinds of neonates affected in terms of birth weight, the differences between early- and late-onset infections, and patterns of empirical antimicrobial treatments, by examining it through one of the major regional neonatal units in the city over a 10-year period. This is a retrospective study of clinical and laboratory data on patients with neonatal sepsis admitted to Tuen Mun Hospital. This is one of the major hospitals in Hong Kong that serves 17.1% of the total population by the year 2021 . The study period was from January 2014 to December 2023, inclusively. Sepsis was defined by the presence of pathogen(s) in the blood and/or cerebrospinal fluid (CSF) identified by the laboratory via culture. 
Blood samples were collected in BACTEC Peds Plus/F culture vials (Becton Dickinson, Sparks, MD, USA). The vials were incubated in the BD BACTEC FX blood culture system (Becton Dickinson, Sparks, MD, USA) for up to 5 days, and vials flagged positive were further subcultured onto solid culture media. CSF samples were collected in sterile glass Bijou bottles and inoculated onto solid culture media and Thiol broth (Sigma-Aldrich, St. Louis, MO, USA) enriched with pyridoxal hydrochloride (Sigma-Aldrich, St. Louis, MO, USA) and Vitox supplement (Oxoid Ltd., Basingstoke, UK) for up to 3 and 5 days, respectively. Growth on culture media was isolated and identified using the available methods (including conventional methods, API identification kits (bioMérieux, Marcy-l’Étoile, France), and BD Phoenix (Becton Dickinson, Sparks, MD, USA) automated identification system, and, since August 2016, the Bruker MALDI Biotyper system (Bruker Daltonik GmbH, Bremen, Germany) as well). Antimicrobial susceptibility testing and interpretation were performed according to Clinical and Laboratory Standards Institute guidelines. Subjects were included when they were under the chronological age of 28 days of life at the time of microbiological sampling. Bacillus , Corynebacterium , Cutibacterium , Lactobacillus , and Micrococcus spp. were considered culture contaminants in the present study and were excluded from analysis. For coagulase-negative Staphylococcus (CoNS), the cultures were considered significant if they were treated with at least 5 days of effective antibiotics with reference to susceptibility results; otherwise, they were also classified as contaminants and were excluded. If a patient has more than one culture request growing the same organism within the period of antibiotic treatment, those positive culture requests would count towards a single organism. Patient demographics, including the age at the first microbiological culture request that turned positive, gestation in weeks, birth weight in kilograms, and mortality, defined as death within 7 days of a positive blood/CSF culture, were recorded. Additional information collected includes whether patients, prior to the sepsis episode, were ever discharged into the community (defined as out-born) or had been hospitalised since birth (defined as in-born), any systemic antibiotic given in the intrapartum or neonatal period, as well as the antimicrobial choices used. The study period was divided into two five-year periods, the first five years 2014 to 2018, and the second five years 2019 to 2023. Patients were divided into two groups according to birth weight, the very low/extremely low birth weight (V/ELBW, <1.5 kg) and the normal/low birth weight (N/LBW, ≥1.5 kg) groups. The infections were classified into early-onset sepsis (EOS) if the positive cultures were obtained when the patient was under the age of 72 h of life and into late-onset sepsis (LOS) if patients were older than 72 h of life at the time of specimen collection [ , , ]. LOS was further classified into healthcare-associated (HA-LOS) if the positive cultures were obtained more than 48 h after admission, including patients who were discharged for less than 48 h before readmission, and community-acquired (CA-LOS) otherwise [ , , ]. Empiric therapy was defined as antibiotic treatment, alone as monotherapy or in combination therapy, used after blood and/or CSF samples were taken and prior to the report of positive microbiological findings. 
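The onset and acquisition definitions above reduce to simple rules on the age at culture and the time since admission. A minimal sketch of that classification is shown below; the function and field names are illustrative and are not part of the study's actual data pipeline.

```python
# Sketch: classify a sepsis episode using the definitions stated above.
# EOS: positive culture at < 72 h of life; LOS otherwise.
# HA-LOS: culture obtained > 48 h after admission (or readmission within 48 h of
# discharge); CA-LOS otherwise. Names are illustrative placeholders.
def classify_episode(age_at_culture_h, hours_since_admission, readmitted_within_48h=False):
    if age_at_culture_h < 72:
        return "EOS"
    if hours_since_admission > 48 or readmitted_within_48h:
        return "HA-LOS"
    return "CA-LOS"

def birth_weight_group(birth_weight_kg):
    # V/ELBW: very low / extremely low birth weight (< 1.5 kg); N/LBW otherwise.
    return "V/ELBW" if birth_weight_kg < 1.5 else "N/LBW"

print(classify_episode(age_at_culture_h=30, hours_since_admission=30))    # EOS
print(classify_episode(age_at_culture_h=240, hours_since_admission=12))   # CA-LOS
print(classify_episode(age_at_culture_h=240, hours_since_admission=200))  # HA-LOS
print(birth_weight_group(1.2))                                            # V/ELBW
```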
Empiric therapy was classified as concordant if at least one of the empirical antimicrobials used was an agent to which the pathogen was susceptible based on identification and susceptibility testing, and as discordant if the pathogen was not susceptible to any of the empiric antimicrobials used for that patient. The included patients' demographic data, microbiological findings, and treatment relevance were described, and mortality cases were presented. The demographic data and microbiological findings were compared between the first and the second five-year periods to see if there were any changes over time. Microbiological data, including antibiotic nonsusceptibility, were compared between the different birth weight groups, EOS versus LOS, and between CA-LOS and HA-LOS. Statistical analysis was performed with Stata 14.2 (StataCorp, College Station, TX, USA). Odds ratios (ORs) with 95% confidence intervals (95% CIs) were computed. Non-parametric data were compared by Fisher’s exact test. A p -value < 0.05 was considered significant. A total of 161 neonates were identified with microorganisms found in the blood and/or CSF cultures during the study period. Among them, 43 were excluded as the pathogens were considered contaminants. The remaining 118 neonates formed the subjects of this study, and their demographics are listed in . The patients’ demographic data were comparable between the two periods. The total number of live births during the 10-year study period was 45,555, with 27,552 from 2014 to 2018 and 18,003 from 2019 to 2023 . Thus, neonatal sepsis occurred at an annual incidence of 2.59 per 1000 live births. Comparing the figures between the first and second 5-year periods, there was a significant drop in the incidence of neonatal sepsis in the latter period (3.23 vs. 1.61 per 1000 live births, respectively; p < 0.001). When analysed separately, the difference was not observed among infants with early-onset neonatal sepsis (0.76 vs. 0.38 per 1000 live births; p = 0.13). However, the incidence of late-onset neonatal sepsis was significantly lower in the latter 5-year period (2.52 vs. 1.22 per 1000 live births; p = 0.003). This latter period from 2019 to 2023 coincided with the global COVID-19 pandemic. Of the 118 neonates, a total of 125 organisms were found. Among them, 117 organisms were identified from the blood only, 2 from CSF only, and 6 were positive in both the blood and the CSF. From the patient perspective, 113 had a single pathogen, 3 were infected by two organisms, and 2 had three organisms. The pathogens are listed in . Comparing the first and the second 5-year periods, the distribution of Gram-positive bacteria, Gram-negative bacteria, and fungi did not differ. The pattern of infection was compared between N/LBW and V/ELBW infants ( ). The V/ELBW infants were more often exposed to prior antibiotic treatment, and they had more infections with CoNS and yeasts. When EOS and LOS were compared ( ), streptococcal infections were more common in EOS, especially Streptococcus agalactiae (group B Streptococcus , GBS) and Streptococcus bovis group (SBG). On the other hand, CoNS and Gram-negative bacteria other than Escherichia coli were more commonly seen in LOS. Among those who presented with CA-LOS, the affected infants were exclusively in the N/LBW group. GBS was exclusively encountered among CA-LOS cases ( ). Eight (6.8%) neonates died. Their clinical and laboratory features are listed in .
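The period comparison reported above can be reproduced, to a close approximation, from the published rates and denominators. The sketch below recalculates the incidence per 1000 live births and applies Fisher's exact test; the case counts (about 89 and 29) are back-calculated from the reported rates and totals, so they are approximations rather than figures quoted directly in the text.

```python
# Sketch: annual incidence per 1000 live births and period comparison with
# Fisher's exact test. Case counts are approximated from the reported rates
# (about 89/27,552 in 2014-2018 and 29/18,003 in 2019-2023) for illustration.
from scipy.stats import fisher_exact

cases_p1, births_p1 = 89, 27552   # first 5-year period
cases_p2, births_p2 = 29, 18003   # second 5-year period

rate_p1 = 1000 * cases_p1 / births_p1
rate_p2 = 1000 * cases_p2 / births_p2
print(f"{rate_p1:.2f} vs {rate_p2:.2f} per 1000 live births")

table = [[cases_p1, births_p1 - cases_p1],
         [cases_p2, births_p2 - cases_p2]]
odds_ratio, p_value = fisher_exact(table)
print(f"OR={odds_ratio:.2f}, p={p_value:.4g}")
```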
All neonates were premature, and seven of them were born at 25 weeks of gestation or less. The pathogens included four yeasts, three Gram-negative bacteria, and one GBS. Concerning antibiotic nonsusceptibility of the bacterial pathogens ( ), most of the staphylococcal isolates were methicillin-resistant. This was even more significant among infants of V/ELBW. On the other hand, the nonsusceptibility of streptococcal isolates to penicillin was uncommon, with only two Streptococcus mitis group isolates being nonsusceptible. All Enterococcus faecalis strains were ampicillin-susceptible. No vancomycin resistance was found among the Gram-positive isolates. The majority of the Gram-negative pathogens were nonsusceptible to ampicillin, while nonsusceptibility to gentamicin and cefotaxime or ceftriaxone varied, with organisms found from the V/ELBW group exhibiting a higher rate of cefotaxime or ceftriaxone nonsusceptibility. No carbapenem resistance was found among the Gram-negative isolates. The vast majority of patients, 112 (94.9%) out of 118, were empirically prescribed at least two antimicrobials. Monotherapy was used empirically in four cases. One patient did not receive empirical antimicrobials prior to the positive culture result, and another died before the commencement of empirical antimicrobials. Throughout the study period, both ampicillin and cefotaxime were the most commonly prescribed antibiotics, and the combination corresponded to the most popular regimen used in the neonatal unit ( ). However, cefotaxime was used more often in the second 5-year period, while the usage of gentamicin went out of favour. On the other hand, there was a reciprocal relationship between the use of ampicillin and vancomycin when infants with different birth weights were compared. Ampicillin was preferred in N/LBW infants while vancomycin was used more often in V/ELBW infants. Among infants with LOS, the combination of cefotaxime with either ampicillin or vancomycin appeared to be the most popular regimen. Concordance of empirical antimicrobial therapy and antibiotic susceptibility was achieved in more than 60% of cases ( ). It appeared to be lower during the second 5-year period and in late-onset sepsis, but the difference was not statistically significant. However, the concordance rate was significantly poorer in septic V/ELBW infants. The current study provides the first overview of the microbiological profile of neonatal sepsis, both early- and late-onset sepsis, in Hong Kong over a period of 10 years. Although the number of infected neonates was fewer during the last five years as compared with the first, likely because of the COVID-19 pandemic , patient demographics such as the proportion of V/ELBW infants, the ratio of inborn to out-born infants, and the proportion of newborns receiving intrapartum antibiotics prior to the onset of sepsis remained the same. Also, the proportions of the Gram-positive bacteria, Gram-negative bacteria, fungi, the distribution of common pathogens, and the rate of antibiotic resistances were comparable between the first and the second 5-year periods. The sharp change in the rate of neonatal sepsis between the first 5 years (2014–2018) and the second 5 years (2019–2023) was an unexpected finding. This difference was not significant among the cases of EOS but was evident among the cases of LOS. 
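The methods specify odds ratios with 95% confidence intervals for comparisons such as the concordance of empirical therapy between birth-weight groups reported above. A minimal sketch of the log-based (Woolf) interval is shown below; the 2×2 counts are hypothetical placeholders, since the underlying tables are not reproduced in this excerpt.

```python
# Sketch: odds ratio with a 95% CI (Woolf logit method) for a 2x2 table,
# e.g. discordant empiric therapy in V/ELBW vs. N/LBW infants.
# The counts below are hypothetical placeholders, not study data.
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """a, b = exposed with/without outcome; c, d = unexposed with/without outcome."""
    or_ = (a * d) / (b * c)
    se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se_log_or)
    hi = math.exp(math.log(or_) + z * se_log_or)
    return or_, lo, hi

print(odds_ratio_ci(a=14, b=10, c=20, d=60))  # hypothetical counts
```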
This is understandable as EOS results from in utero transmission or spread of pathogens to the foetus through the maternal genital tract with organisms that commonly colonise the genitourinary and lower gastrointestinal tracts . The COVID-19 pandemic probably did not have any effects on such predisposition. On the other hand, LOS is more often related to nosocomial pathogen transmission and medical interventions during hospitalisation . The fall in the incidence of neonatal sepsis, in particular, HA-LOS, was most likely the result of more stringent isolation, reinforced visitor policies on infection control, and hygienic measures targeted at the facilities for pathogen spread . A similar fall in the incidence of LOS during the COVID-19 pandemic has also been observed in India , Italy , and Sweden , but not in the USA . This observation reinforces the importance of infection control measures in neonatal healthcare. GBS remains one of the major pathogens in neonatal sepsis. In Hong Kong, universal prenatal maternal carriage screening was implemented in 2012, and intrapartum antibiotic treatment against GBS was used in those women who were tested positive . The 16 cases represented an annual incidence of 0.36 per 1000 live births, comparable to the incidence rates of 0.26 and 0.24 per 1000 live births limited to the first seven days of life reported by Chan et al. and Ma et al. , respectively. GBS occurred more frequently in the first three days of life, accounting for nearly one-third of the pathogens identified. The infection occurred more often in neonates without intrapartum antibiotic prophylaxis as compared to those who had received antibiotics during the intrapartum or neonatal period. Additionally, in LOS, GBS was exclusively a community-acquired pathogen ( p < 0.001). The current study was not designed to look into the reasons why newborns continue to be affected by EOS due to GBS in the era of universal antenatal screening, but the unavailability of a timely screening as a result of unplanned or premature delivery and the limited sensitivity of the screening test have been suggested . Nevertheless, GBS remains susceptible to penicillin and is therefore always covered by the current empirical therapy regimens containing ampicillin or cefotaxime. SBG was another significant Gram-positive pathogen group in neonatal EOS. Apparently, neonatal SBG infection has been rising following the practice of universal antenatal screening and intrapartum antibiotic targeting GBS . Although our strains remained penicillin-susceptible, clinicians should be aware that SBG with reduced susceptibility to penicillin had been uncommonly reported [ , , ]. CoNS was the most frequent pathogen of LOS, of which the majority of the strains were methicillin-resistant. Because CoNS are skin commensals , neonatal CoNS infections are usually considered to be nosocomial in nature, facilitated by biofilm formation in medical devices such as venous catheters . Our data showed that neonates in the V/ELBW group were particularly vulnerable to CoNS infections. Although most CoNS species are understood to be of low virulence, premature newborns are at higher risk of CoNS infections than their term counterparts because they often require prolonged hospitalisation and medical interventions . Gram-negative infections occurred in both EOS and LOS. 
Escherichia coli was the only Gram-negative pathogen in EOS, which was consistent with the observation that Escherichia coli is the predominant aerobic Gram-negative organism of the birth canal . Non- Escherichia coli Gram-negative infections were exclusively seen in LOS in our series. This is similar to the findings from Yale University School of Medicine looking into the microbiology of neonatal sepsis from 1989 to 2003, where the majority of non- Escherichia coli Gram-negative infections occurred in neonates aged 5 days or above (5–30 days: 63/90 vs. 0–4 days: 9/35) . A more recent study from Sweden also showed non- Escherichia coli Gram-negative infections occurring in LOS more frequently than in EOS (LOS: 51/74 vs. EOS: 9/33) . Our study shows CA-LOS can include organisms that typically affect the older infant, such as Campylobacter jejuni and Neisseria meningitidis . On the other hand, the neonate with prolonged hospitalisation is susceptible to infections due to Gram-negative organisms, often contributed by suboptimal infection prevention and control practises leading to horizontal transmission by healthcare personnel. In addition, the gut of the preterm neonate is colonised with firmicutes and proteobacteria in greater quantities (as most neonatal aerobic Gram-negative pathogens belong to the latter group) , often contributed by the environment of the neonatal unit and antibiotic exposures . Gut dysbiosis often precedes LOS and is associated with gut translocation [ , , ], with studies showing similarity of bacterial strains of bloodstream infections to strains found in the stools of such patients before the onset of infection in neonatal units . The mortality of invasive fungal infection in neonates is particularly high . From our data, even among HA-LOS patients in the V/ELBW group, invasive fungal infection was still significantly associated with mortality compared to sepsis due to bacterial organisms (4/7 vs. 3/26, p = 0.009). First-line empirical treatment against suspected neonatal invasive candidiasis involves the use of amphotericin B deoxycholate , but there is a paucity of guidance regarding which neonates should be initiated on empirical antifungal treatment before microbiological results are available. Amphotericin B is not frequently prescribed, as neonatal invasive infection remains uncommon, and amphotericin B is associated with many potential toxicities . As retrospective studies have suggested a possible benefit of early administration of empirical antifungal in cadidaemic neonates , it would be important to develop recommendations with regard to which patients deserve empirical therapy. On the other hand, guidelines have suggested the use of antifungal prophylaxis in the neonatal unit under specific circumstances [ , , ], with studies demonstrating variable successes . Antimicrobial resistance has emerged as a global problem, and neonatal units are not spared . Poorer survival has been noted in neonates infected with multidrug-resistant organisms, especially carbapenem-resistant Gram-negative organisms . In the current study, carbapenem resistance was not an issue, but almost half of the Gram-negative isolates among E/VLBW neonates were nonsusceptible to cefotaxime or ceftriaxone. 
Although the inclusion of a third-generation cephalosporin as part of the empirical antimicrobial use in neonatal sepsis is an attractive option, such use is associated with an increase in the risk of multidrug-resistant Gram-negative bacterial infection and, especially in preterm infants, candidal colonisation and invasive fungal infections [ , , ]. A Netherlands study has shown that a regimen containing cefotaxime has a high tendency to result in more cephalosporin resistance in two neonatal units, necessitating the use of meropenem in some patients . Currently, cefotaxime is recommended by guidelines for the empirical treatment of neonatal meningitis, community-acquired neonatal sepsis, and in neonates with microbiological evidence of Gram-negative infections . The current study faces several limitations. Although the neonatal unit provided hospitalisation service for all neonates in need in the region served by Tuen Mun Hospital, the single-centre nature of the study limits its generalisability. We have not included patients with culture-negative neonatal sepsis because differentiating from non-infective aetiologies could be difficult . We have not included positive cultures other than blood and CSF, as these are the most common normally sterile specimen types from the neonatal unit. We acknowledge that, in this study, the exclusion of the organisms defined as contaminants, although very rarely causing infections in neonates, may lead to an underestimation of patients with neonatal sepsis, while the approach of using treatment duration for determining the significance of CoNS may lead to overestimation. We presented the first epidemiology and microbiology of culture-proven neonatal sepsis from a single institution in Hong Kong in the post-universal antenatal group B Streptococcus screening era. Early-onset sepsis was mainly caused by group B Streptococcus , Streptococcus bovis group, and Escherichia coli. In late-onset sepsis, coagulase-negative Staphylococcus and non- Escherichia coli Gram-negative bacteria and yeasts assumed significance. Overall, cefotaxime consumption had increased in the latter half of the study period. Within these five years when the COVID-19 pandemic reigned, there was a significant drop in neonatal sepsis, especially late-onset sepsis. There is a need for antimicrobial stewardship in the neonatal unit to optimise appropriate therapy for sick neonates, particularly in premature neonates where the concordance of empirical antimicrobial therapy was lower.
Comparative lipidome and transcriptome provide novel insights into zero-valent iron nanoparticle-treated Fremyella diplosiphon
Only a few microorganisms, including yeasts, molds, algae, and cyanobacteria can accumulate lipids to more than 20% of their dry cell weight and are thus termed as oleaginous . Of these, cyanobacteria are ideal third and fourth-generation organisms for biotechnological advancements, particularly in the fields of biofuel production and carbon capture . Although cyanobacteria-derived lipids store twice as much energy per carbon atom as carbohydrates, offering a two fold increase in fuel energy content, achieving commercial viability for cyanobacterial fuel remains a significant challenge , . Of the various species, Fremyella diplosiphon , a model organism renowned for its short generation cycle and adaptability to a wide range of light, offers a unique platform for unraveling the intricate molecular responses induced by environmental stressors such as salinity and light . With its abundant high-value fatty acid methyl esters (FAMEs) in transesterified lipids, including value-added 7-hexadecanoic acid (C16:1) and 9-octadecenoic acid (C18:1) methyl esters, F. diplosiphon has garnered significant attention as a promising candidate for bioenergy production . Although the impact of abiotic stressors such as heat, cold, salinity, nitrogen starvation, photo-oxidation, anaerobiosis, and osmotic changes on microorganisms are well established – , chemical stressors such as nanoparticles and their consequences are yet to be fully understood. Specifically, metallic nanoparticles have started to gain preeminent importance due to their distinctive optical and magnetic characteristics that significantly impact biological processes such as growth, cellular physiology, photosynthetic activity, pigmentation, gene expression, and macromolecule synthesis, including lipid content – . This impact is particularly evident in photosynthesis and pigmentation, where the interaction of nanoparticles with cellular structures can enhance or inhibit these processes depending on the concentration, known as the “hormetic effect” , . Several studies, such as those by Du et al. using Zn nanoparticles on Microcystis aeruginosa , and Bytešníková et al. using graphene oxide nanoparticles on Chlamydomonas reinhardtii , have explored these interactions, providing valuable insights into the mechanisms by which nanoparticles influence cellular functions. As vital components of cell membranes, lipids are crucial for maintaining structural integrity and functionality and regulating the pores that control the movement of substances in and out of the cell . Furthermore, lipids such as sterols are essential in modulating intracellular signaling pathways , and can alter gene expression involved in the formation of lipid rafts that compartmentalize cellular processes along with the synthesis of secondary messengers in signal transduction and the provision of energy storage through fatty acid metabolism , . Therefore, nanoparticles can influence lipid metabolism by altering enzymatic activities involved in lipid synthesis and degradation. Additionally, the interaction of nanoparticles with lipid bilayers can alter membrane properties such as fluidity, permeability, and phase behavior , since unsaturated fatty acids are essential constituents of polar glycerolipids in biological membranes, and the unsaturation level of membrane lipids is crucial for controlling membrane fluidity . 
In living organisms, the regulation of membrane fluidity, which is critical for the normal functioning of biological membranes, plays a significant role in tolerance and acclimatization to environmental stresses through a process known as “homeoviscous adaptation”, which involves changes in membrane fatty acid composition as an adaptive response to environmental changes . For instance, in a study on the cyanobacterium Arthrospira maxima treated with 5.1 mg L⁻¹ nZVIs, notable changes in lipid composition, including a decrease in saturated and monounsaturated fatty acids and an increase in polyunsaturated fatty acids, were reported . Consequently, analyzing these interactions is crucial for evaluating the potential benefits and risks associated with nanoparticle exposure in biological systems, as they can have profound implications for cellular physiology, stress response, and overall organismal health. In addition to lipid analysis, understanding the impact of nanoparticles on the transcriptome of cyanobacteria can provide crucial insights into the molecular mechanisms underlying their responses to these environmental stressors. Depending on the type of nanoparticle exposure, altered gene expression, metabolism, and metal homeostasis have been documented . By analyzing changes in gene expression patterns, researchers can elucidate the adaptive strategies cyanobacteria employ when exposed to nanoparticles, potentially uncovering novel pathways for biotechnological applications. A study investigating the impact of 200 mg L −1 nZVIs on the Dictyosphaerium sp . transcriptome revealed 2,065 differentially expressed genes, with 400 upregulated and 1,665 downregulated compared to the untreated control . This significant alteration in gene expression highlights the profound influence of nanoparticles on cellular processes such as pigment synthesis, photosynthetic efficiency, and stress response pathways, underscoring the importance of understanding these interactions in advancing both environmental and biotechnological research. In addition to profiling genes using the high-throughput transcriptome sequencing (RNA-seq) approach, low-abundance transcripts, which account for approximately 30% of most transcriptomes, as well as novel exons and splice junctions, can be quantified . Moreover, the choice of nanoparticles used in culture treatment can profoundly impact differential gene expression, offering valuable insights into their biological effects. In a study by Mortimer et al. , carbon and boron nitride-based nanoparticles induced significant transcriptomic responses in bacteria at non-growth-inhibitory concentrations of 10 mg L −1 . Specifically, multiwall carbon nanotubes induced the expression of 111 genes, while graphene, boron nitride, and carbon black differentially regulated 44, 26, and 25 genes, respectively. Despite a report on the impact of an optimal nZVI concentration on lipid production in F. diplosiphon , there have been no studies examining their impact under moderate and high ROS levels in this model species. In addition, there exists a gap in understanding the impact of nZVIs on differentially expressed target genes in this species. Therefore, in the present investigation, we aimed to explore the impact of moderate and sub-optimal concentrations of nZVIs on F. diplosiphon FAME profiles using comprehensive two-dimensional gas chromatography coupled with time-of-flight mass spectrometry (GCxGC/TOF-MS).
In addition, differential expression of targeted genes and signaling pathways in F. diplosiphon under moderate ROS induced by nZVIs was examined using whole transcriptome sequencing. Strains, culture conditions, and nanoparticles F. diplosiphon strains, B481-WT from the UTEX algal culture collection repository (Austin, TX, United States) and B481-SD (B481-WT strain overexpressed with the sterol desaturase gene; accession MH329183) were used in this study. Cultures were grown in liquid BG-11 medium containing 20 mM HEPES to an exponential growth phase (~ optical density 750 nm of 0.8) under continuous shaking at 170 rpm and 28 °C in an Innova 44R incubator shaker (Eppendorf, Hamburg, Germany). Light wavelengths in the shaker were adjusted to 30 µmol m −2 s −1 using the model LI-190SA quantum sensor (Li-Cor, Lincoln, NE, USA). Nanofer 25s zero-valent iron nanoparticles (nZVIs), sourced from Nano Iron Company (Rajhrad, Czech Republic), were added to liquid media at final concentrations of 3.2 mg L −1 and 25.6 mg L −1 . These concentrations were selected based on a previous report of their moderate and high levels of ROS in F. diplosiphon . Quantification of total lipids in nZVI-treated Fremyella diplosiphon by gravimetric analysis F. diplosiphon B481-WT and B481-SD strains were grown in BG-11 media containing 3.2 and 25.6 mg L −1 nZVIs for 15 days in conditions mentioned in Sect. 2.1. Cultures grown in the absence of nZVIs under similar conditions served as control. Cells were pelleted, lyophilized, and total lipids extracted using a chloroform: methanol method based on Folch et al. . Briefly, a 15 ml methanol/chloroform (2:1, v/v) solvent was added to 100 mg of lyophilized cells. The mixture was homogenized, incubated on ice, and centrifuged at 3000 rpm for 10 min to collect the organic phase. The residue was re-extracted thrice with 4 ml of methanol and 2 ml of chloroform, and the organic phase dried in a rotary evaporator to estimate the total lipid content. GCxGC/TOF-MS analysis of transesterified lipids in nZVI-treated F. diplosiphon F. diplosiphon lipids were converted to FAMEs through direct transesterification as described by Wahlen et al. . Lyophilized cells (100 mg) were dissolved in 3 ml methanol containing 1.8% (v/v) sulfuric acid and subjected to 80 °C for 20 min in a commercial multimode scientific microwave (CEM Corporation, USA) with a maximum power output set at 25 W per sample. The reaction was quenched with 4 ml chloroform, washed in distilled water, and centrifuged at 2000 rpm for phase separation. The chloroform phase containing FAMEs and lipids was removed, and the remaining biomass was washed twice with 2 ml chloroform. The transesterified lipids were dried under nitrogen gas and then reconstituted in 2 ml of dichloromethane, with cholestane (50 µg ml −1 ) added as an internal standard. FAMEs and alkanes were characterized and quantified by GCxGC/TOF-MS as described by Tabatabai et al. . Differential gene expression in wild-type and nano-treated F. diplosiphon variants Total RNA from F. diplosiphon B481-WT and B481-SD strains grown in 3.2 mg L −1 nZVIs and the untreated control was extracted according to the modified method of Fathabad et al. . RNA integrity was verified prior to library preparation. After removing ribosomal RNA and fragmentation, the first strand of cDNA was synthesized with random hexamer primers, using a reaction buffer containing dTTPs instead of dUTPs for the second strand. 
The directional library was subjected to end repair, A-tailing, adapter ligation, size selection, USER enzyme digestion, amplification, and purification. Qubit and real-time PCR were used to quantify the library, and a bioanalyzer was used to assess the size distribution. Sequencing was performed on the Illumina platform using paired-end reads after cluster generation (Novogene Corporation Inc., CA). Raw reads in FASTQ format were processed by removing poly-N sequences and low-quality reads, and Q20, Q30, and GC content were calculated. Reference genome and gene model annotation files were downloaded from the JGI genome portal. Building the index of the reference genome and aligning clean reads were performed using Bowtie 2 software (v23.4.3) . Statistical analysis Total lipid and FAME analysis Total lipids and FAME content were reported as a cumulative treatment mean ± standard error. Statistical significance was determined using one-way analysis of variance and Tukey’s honest significant difference post-hoc test at 95% confidence intervals ( p < 0.05). The single-factor, fixed-effect ANOVA model, Y_ij = µ + αS_i + ε_ij, was used, where Y_ij is the total lipid content in strain i and biological replicate j, µ represents the overall total lipid content adjusted for the effect of strain (αS), and ε_ij is the experimental error from strain i and biological replicate j. Gene structure analysis and quantification of gene expression level Novel genes, operons, and transcription start sites were identified using Rockhopper. The 700 bp sequence upstream of each transcription start site was extracted for promoter prediction using a Time-Delay Neural Network. Fragments per kilobase of transcript per million mapped reads (FPKM) were calculated for each gene based on gene length and the read count mapped to that gene; FPKM thereby accounts for the effects of sequencing depth and gene length on read counts at the same time. Differential expression analysis Differential expression analysis, based on two biological replicates, was performed on the RNA-seq data using the DESeq2 R package. A negative binomial distribution was employed to determine differential expression in digital gene expression data, and the resultant p-values were adjusted using Benjamini and Hochberg’s approach for controlling the false discovery rate. Genes with an adjusted p-value < 0.05 determined by DESeq2 were assigned as differentially expressed. Prior to differential gene expression analysis, the read counts for each sequenced library were adjusted by the Trimmed Mean of M-values method through one scaling normalization factor. The edgeR package (v3.24.3) was used to analyze the differential expression between two conditions. A corrected p-value of 0.05 and |log2(fold change)| of 1 were set as the thresholds for significant differential expression. GO and KEGG enrichment analyses of differentially expressed genes Gene Ontology (GO) enrichment analysis of differentially expressed genes, incorporating a correction for gene length, was performed with the clusterProfiler R package (v3.8.1). GO terms with corrected p-values less than 0.05 were considered significantly enriched in differentially expressed genes. The clusterProfiler R package was also used to test differentially expressed genes for statistical enrichment in KEGG pathways.
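The fixed-effect model and post-hoc procedure described above correspond to a standard one-way ANOVA followed by Tukey's HSD test. The sketch below shows an equivalent computation in Python with invented lipid values; it is a minimal illustration, not necessarily the software or exact workflow used in the study.

```python
# Sketch: one-way ANOVA with Tukey HSD post-hoc test for total lipid content
# across treatments, mirroring the fixed-effect model described above.
# Lipid values (% dry weight) are invented for illustration.
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

data = {
    "control":   [12.1, 12.8, 11.9],
    "nZVI_3.2":  [14.5, 15.1, 14.8],
    "nZVI_25.6": [13.2, 13.9, 13.5],
}

f_stat, p_val = f_oneway(*data.values())
print(f"ANOVA: F={f_stat:.2f}, p={p_val:.4f}")

# Flatten into value/group vectors for the Tukey HSD comparison.
values = [v for vals in data.values() for v in vals]
groups = [g for g, vals in data.items() for _ in vals]
print(pairwise_tukeyhsd(values, groups, alpha=0.05))
```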
With a proteome predicted to produce and secrete fatty acids, cyanobacteria have diverse biological and industrial applications. Their inherent attributes, such as environmental sustainability and the enhanced adaptive capability conferred by phycobiliproteins, make these organisms a viable source of bioproducts. The present research elucidates new approaches to studying total lipid composition and transcriptome changes in F. diplosiphon strains B481-WT and B481-SD exposed to varying nZVI concentrations. Total lipids in nZVI-treated F. diplosiphon B481-WT and B481-SD Quantitative analysis by gravimetry offers highly precise and reproducible measurements of total lipid content in various organisms, including cyanobacteria and microalgae , . Comparison of total extracted lipids in B481-SD treated with 3.2 and 25.6 mg L −1 nZVIs revealed significantly higher ( p < 0.05) total lipid content compared to B481-WT at the same concentrations (Fig. ). In addition to the cultures treated with nZVIs at both concentrations, total lipid content in B481-SD was significantly higher than that in the untreated B481-WT (Fig. ). These findings suggest that overexpression of the sterol desaturase gene in B481-SD could have enhanced lipid production by catalyzing an intermediate step in the synthesis of major sterols, in addition to influencing cellular signaling and membrane lipid distribution . In addition, enhanced tolerance to stressors such as β-lactam (ampicillin, cefotaxime) and aminoglycoside (kanamycin) antibiotics in B481-SD compared to the B481-WT strain has been reported . F.
diplosiphon FAMEs in optimal and sub-optimal nZVI concentrations In addition to the total lipid content, we analyzed FAME composition, which is a crucial indicator in determining cyanobacterial lipid quality. As the most abundant type of FAMEs in F. diplosiphon , we observed hexadecanoic acid methyl ester (C16:0) to comprise 66.4% and 53.3% of the total FAMEs in the untreated B481-WT and B481-SD strains respectively. In B481-WT, the percentage of hexadecanoic acid was lower than the untreated control, at 52.6% and 51.9% in 3.2 mg L⁻¹ and 25.6 mg L⁻¹ nZVIs, respectively. The alteration in the predominant saturated fatty acid profile of B481-WT could be attributed to its vulnerability to nZVIs, which could have altered the FAMEs. It should be noted that the heightened sensitivity in B481-WT is possible due to the lack of the overexpressed sterol desaturase gene, unlike B481-SD. In a similar study by Fazelian et al. , a significant decrease in the percentage of hexadecanoic acid methyl ester (C16:0), from 22.84% in the untreated control to 13.73% in Nannochloropsis oculata treated with Fe 2 O 3 -NPs was reported. However, we observed the highest percentage of C16:0 in B481-SD at 3.2 mg L −1 nZVI concentration (55.71%) compared to the untreated control (52.47%) and 25.6 mg L −1 nZVIs (51.78%) (Fig. ). Since hexadecanoic acid is present in cyanobacterial/microalgal strains ranging from 23 to 43% and the composition of fatty acids is influenced by several parameters , , it is not surprising to observe variations in C16:0 between the two strains tested in this study. Thus, alterations in fatty acid saturation can influence the resilience of the species to abiotic stressors such as nZVIs. Similar observations have been reported in other cyanobacterial species, where an increase in unsaturated fatty acid content occurred as an adaptive response for maintaining membrane fluidity under stress conditions and is attributed to the lower melting point of unsaturated fatty acids compared to saturated fatty acids . This activation could serve as a protective mechanism, where elevated SFAs and MUFAs stabilize cell membranes against oxidative stress induced by nZVIs. In addition, we observed a 1.3X increase in unsaturated fatty acids in untreated B481-SD compared to the untreated B481-WT (Table ). These results are in accordance with a previous finding where overexpression of the sterol desaturase gene resulted in higher unsaturated fatty acids, possibly due to enhanced intracellular signal activation . In addition, the genes encoding acyl-CoA desaturase (2.7X) and fatty acid desaturase (1.2X) in B481-SD were significantly enhanced compared to untreated B481-WT (Table A). Our results on lipid profiling revealed a ~ 1.5X increase in monounsaturated fatty acids in B481-WT treated with 3.2 and 25.6 mg L −1 nZVIs compared to the untreated control, while no significant changes were observed in B481-SD (Table ). Interestingly, we noted ~ 1.4X more MUFAs in the untreated B481-SD compared to untreated B481-WT (Table ). These results indicate that nZVIs at both 3.2 and 25.6 mg L −1 concentrations could significantly impact MUFA production in B481-WT; however, in comparison to the untreated cultures, B481-SD (10.67%) exhibited higher MUFAs than B481-WT (7.61%), possibly due to alterations in lipid saturation linked to sterol desaturase. In a similar study by Lu et al. , exposure of Chlamydomonas sp. to isobutyl cyanoacrylate nanoparticles resulted in increased MUFAs, including C18: 1. 
These changes were attributed to intracellular stress-associated alterations in cyanobacterial lipid profile distribution. Additionally, a strong connection between antioxidant activity and MUFAs was demonstrated in a comparative study, in which Gloeocapsa sp. had the highest MUFA content (27.01 ± 0.15) corresponding to the highest antioxidant DPPH activity (88.84 ± 10.26), surpassing Chroococcidiopsis sp. (19.71 ± 0.93) and Merismopedia sp. (27.32 ± 2.83) . These data support our findings that the increased MUFA levels observed in B481-WT and B481-SD treated with 3.2 mg L −1 nZVIs relate to the antioxidant-related benefits of MUFAs and improved oxidative stress resilience. Further, our findings revealed that 61.2 to 80.5% of the extracted FAMEs consisted of SFAs (Table ). It was interesting to note fewer SFAs in B481-WT treated with 3.2 (62.82%) and 25.6 (66.05%) mg L −1 nZVIs compared to the untreated control (78.63%). In a similar study on Nannochloropsis oculata treated with Fe 2 O 3 -NPs, an 8.69% decrease in SFAs compared to the untreated control was reported . However, we observed 65.72% and 64.77% SFAs in B481-SD treated with 3.2 and 25.6 mg L −1 nZVIs compared to 61.18% in the control, suggesting the impact of the overexpressed sterol desaturase gene on B481-SD fatty acid composition. It is important to note the role and benefits of SFAs for industrial applications, including their use in renewable energy production. Despite their poor stability at low temperatures, SFAs, like MUFAs, are essential constituents of FAMEs, contributing to favorable combustion properties such as a higher cetane number and greater calorific value . Thus, a balanced enhancement of SFAs and MUFAs in response to nZVI treatment is crucial for achieving optimal bioproducts. As energy-dense sources, alkanes represent another vital group of key infrastructure-compatible biofuel components synthesized by cyanobacteria. Our findings indicated a significantly higher ( p < 0.05) alkane content, 1.4X in B481-WT and 1.5X in B481-SD, in strains treated with 25.6 mg L −1 nZVIs (Fig. ). The observed increase in alkane content can be attributed to the potential role of nZVIs in altering the activity of enzymes such as acyl-ACP reductase and aldehyde deformylating oxygenase, which are key in converting fatty acyl-ACP to alkanes, thus boosting the overall energy content . This increase could also be attributed to modulations in redox balance, which aid in regulating cyclic electron flow and suppressing ROS , thus enabling the organism to withstand stress while maintaining optimal growth and productivity. Similar findings were observed in a previous study in F. diplosiphon B481-SD treated with 3.2 mg L −1 nZVIs and 0.8 mg L −1 ampicillin, which resulted in a 2–4 fold augmentation in alkanes compared to the untreated control . Furthermore, in a study by Schirmer et al. , a total of 17 genes were identified by subtractive genome analysis in alkane-producing cyanobacteria, while these genes were notably absent in non-alkane-producing species. Of the seventeen, two genes (AAR and ADO) were found to code for crucial enzymes (acyl-ACP reductase and aldehyde-deformylating oxygenase), which led to the discovery of the alkane biosynthesis pathway using fatty acyl-ACP as its substrate. Differential gene expression in nZVI-treated F. diplosiphon To gain insight into the transcriptome changes in F. diplosiphon treated with 3.2 mg L −1 nZVIs, we compared the differential gene expression of treated cells to the untreated control.
All comparisons were made using non-redundant lists of genes, after removing gene IDs that lacked a complete number. When more than one gene (e.g., pseudogenes) was recognized, one was selected; in the case of redundant gene ID numbers, the one that was most altered was chosen. Our results indicated specific cellular functions to be regulated in nZVI-treated F. diplosiphon , in addition to revealing the function of genes and the molecular mechanisms underlying specific biological processes. Reads were aligned to the Tolypothrix sp. PCC 7601 genome, which was used as the F. diplosiphon reference. Using the methodology and filters outlined above, we generated lists of genes with at least 2-fold changes in expression. In total, about 30 million raw paired-end reads were generated for each sample, with an average of 29,625,787 for B481-WT and 31,608,579 for B481-SD. A comparison of expressed genes in the two strains revealed a higher number of genes in the F. diplosiphon B481-SD. As illustrated in the Venn diagram (Fig. ), a total of 715 genes were uniquely expressed in B481-WT treated with 3.2 mg L −1 nZVIs, 24 genes in the untreated control, and 7405 genes were commonly expressed. In contrast, strain B481-SD treated with 3.2 mg L −1 nZVIs revealed 162 uniquely expressed genes while 219 genes were detected in the control; however, 7105 genes were commonly expressed (Fig. ). Analysis of differentially expressed genes (DEGs) Log2 fold-change analysis of B481-WT treated with 3.2 mg L −1 nZVIs revealed 112 up-regulated genes, 1539 down-regulated genes, and 6576 unaltered genes compared to the untreated control (Fig. ). Interestingly, we observed fewer downregulated but more upregulated genes in F. diplosiphon B481-SD, in which Log2 fold changes in 3.2 mg L −1 nZVI-treated cells vs. the B481-SD control indicated 863 down-regulated genes and 948 up-regulated genes, as depicted in the differential gene count and volcano plot (Fig. ). These results suggest that iron levels are pivotal in ensuring balanced changes in gene expression. Alternatively, it is also possible that iron limitation can be crucial in both the up-regulation and down-regulation of genes. A previous study has shown that severe iron limitation caused the downregulation of 36 genes and the upregulation of 35 genes, while mild iron limitation resulted in the downregulation of 27 genes and the upregulation of 62 genes in Synechococcus sp. PCC7002 . The gene ontology (GO) enrichment histogram of co-upregulated and downregulated genes is shown in Fig. . In the three groups (BP, blue; MF, red; and CC, green), the EC score was calculated for each GO category and compared with that of random sampling (purple) to estimate the statistical significance level. The histogram intuitively reflects the distribution of the number of DEGs in GO terms enriched in BP, CC, and MF. The ordinate is the enriched GO term, and the abscissa is the number of differentially expressed genes in the term. The application of nZVIs can significantly influence the photosynthetic efficiency of cyanobacteria by interacting with photosystem I (PSI) and photosystem II (PSII). A previous study has shown that nZVIs can enhance electron transport rates in PSI and PSII , improving light absorption and energy conversion. Furthermore, experiments have demonstrated that nanoparticles, by eliminating unwanted wavelengths, can increase the availability of target wavelengths, thereby promoting pigment content and enhancing the expression of photosynthetic genes .
In this study, we observed an upregulation of the genes involved in photosynthesis, including psaI , psaX , psaM , psbM , psbK , psbA , and psb30 in B481-SD treated with 3.2 mg L −1 nZVIs compared to the untreated control, as indicated by a Log2 fold increase between 1.1 and 1.7 (Fig. ). These findings indicate the impact of nZVIs on F. diplosiphon PSI and PSII, enhancing photosynthetic capacity by upregulating the target genes. This effect may be caused by the uptake of zerovalent iron by the cells, which can serve as a cofactor in enzymatic pathways that enhance photosynthesis . Similarly, a study by Zhu et al. reported that Chlorella pyrenoidosa treated with 15 mg L −1 of TiO 2 nanoparticles increased PSI and PSII gene expression, including psaI , psbA , psbM , and psbK , with a Log2 fold change between 3.56 and 5.51. Moreover, given that nZVI produces Fe(II) through oxidation , both nZVI and iron oxides could be responsible for the growth and physiological responses of treated cells. The Fe(II) generated can further participate in redox reactions, facilitating electron transfer processes crucial for photosynthesis and other metabolic activities . In addition, phycobilisomes and carotenoids in cyanobacteria absorb and transfer excitation energy with high quantum efficiency to photosystems II and I in the photosynthetic lamellae . Comparison of the pigment-coding genes cpeA , apcD , cpcA , and apcA2 in 3.2 mg L −1 nZVI-treated and untreated B481-SD showed a significant upregulation only in the apcA2 gene (Fig. ). In contrast, B481-WT treated with 3.2 mg L −1 nZVIs showed upregulation of both the apcD (Log2 FC 1.1) and apcA2 (Log2 FC 1.86) genes (Table B). The upregulation of the pigment-coding genes in both strains treated with 3.2 mg L −1 nZVIs indicated cellular protection against nanoparticle-associated cellular stress. In a similar study by Ruan et al. , in which Synechocystis sp . PCC 6803 was exposed to 0.25 mg L −1 cadmium nanoparticles for 72 h, upregulation of the allophycocyanin subunit alpha 2 and the phycocyanin alpha subunit relative to the untreated control group was observed and attributed to a 3–4 fold increase in ROS. Although nZVIs can be beneficial for gene expression related to photosynthesis, pigment production, and lipid synthesis, one of the most important mechanisms underlying these effects is oxidative stress . The generation of ROS through oxidative stress can enhance these processes by promoting cellular activities and metabolic pathways . However, maintaining a delicate balance between beneficial oxidative stress and detrimental oxidative damage is crucial for cellular survival. Therefore, the regulation of ROS levels is essential to harness the positive effects of nZVIs while preventing potential negative impacts on cell health. Antioxidant systems play a vital role in this regulation by scavenging excess ROS and protecting cellular components from oxidative damage. Interestingly, the glutathione reductase gene in B481-SD treated with 3.2 mg L −1 nZVIs was upregulated by a Log2 FC of 1.12 compared to the untreated control (Fig. ). As an essential component of the antioxidant defense system in both cyanobacteria and plants, this gene has been reported to maintain cellular redox homeostasis by reducing glutathione disulfide to its sulfhydryl (antioxidant) form . Another notable upregulated gene enhancing antioxidant capacity in nZVI-treated B481-SD was peroxiredoxin (Log2 FC 1.7), which is well-known for maintaining redox homeostasis in cyanobacteria .
This is supported by a study on Anabaena PCC 7120 , which demonstrated that exposure to H 2 O 2 could induce overexpression of peroxiredoxin Q proteins, thereby reducing oxidative stress in intact photosynthetic pigments . Thus, the better survival rate of nZVI-treated B481-SD compared to B481-WT could be attributed to the elevated expression of genes coding for antioxidant enzymes. In addition, differential expression of multiple iron transmembrane transporter protein genes was observed in both strains treated with nZVIs. While the genes coding for the iron uptake transporter deferrochelatase, the TonB-dependent siderophore receptor, and the iron uptake system protein EfeO were down-regulated by Log2 fold changes ranging from 3.4 to 4.2 in B481-SD, changes in the expression of these genes were not significant ( p > 0.05) in B481-WT (Fig. ). It is well known that several iron-uptake channels identified in cyanobacteria enable their survival in environments with scarce iron supply , . However, maintaining intracellular iron levels is pivotal for cellular homeostasis, and the iron channels located on the cellular membrane in cyanobacteria play a crucial role in ensuring this vital balance. Thus, it is possible that genes encoding these specific channels could be downregulated as a result of negative feedback regulation in response to increased intracellular iron levels. Consequently, we hypothesize that B481-SD has a finely tuned mechanism to maintain intracellular iron homeostasis, in contrast to B481-WT. In addition, the regulation of other ion transport systems, such as those for potassium ions (K + ), can be vital considering the impact of nZVIs on total intracellular ion balance. A low concentration of K + was reported to increase cellular survival , and various cyanobacterial cells such as Microcystis were found to be much more sensitive to K + than other cations . Additionally, we observed that the potassium-transporting ATPase subunit C and the potassium-transporting ATPase subunit KdpA in B481-SD were downregulated by Log2 fold changes of 2.2 and 7.4, respectively, when compared to the untreated culture (Fig. ). Given the level of transcriptome changes observed in the present study, we infer that a number of genes were differentially regulated by exposure of F. diplosiphon to nZVIs.
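As a minimal sketch of how the differentially expressed genes summarized above can be called, the R snippet below applies the thresholds stated in the methods (Benjamini–Hochberg adjusted p < 0.05 and |Log2 fold change| of at least 1) to DESeq2 output. The count matrix is simulated and purely illustrative; it is not the study's data.

```r
library(DESeq2)

# Simulated count matrix: 1000 genes x 4 samples (two replicates per condition)
set.seed(1)
counts <- matrix(rnbinom(4000, mu = 50, size = 1), ncol = 4,
                 dimnames = list(paste0("gene", 1:1000),
                                 c("ctrl_1", "ctrl_2", "nZVI_1", "nZVI_2")))
coldata <- data.frame(condition = factor(c("control", "control", "nZVI", "nZVI")),
                      row.names = colnames(counts))

dds <- DESeqDataSetFromMatrix(countData = counts, colData = coldata,
                              design = ~ condition)
dds <- DESeq(dds)                                   # negative binomial model
res <- as.data.frame(results(dds, contrast = c("condition", "nZVI", "control")))

# Classify genes using the thresholds reported in the methods
up   <- res[which(res$padj < 0.05 & res$log2FoldChange >=  1), ]
down <- res[which(res$padj < 0.05 & res$log2FoldChange <= -1), ]
nrow(up); nrow(down)
```

The same table of adjusted p-values and fold changes is what underlies the volcano plot and the up- and down-regulated gene counts reported for each strain.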
The present study demonstrated significant alterations in the total lipid content of both the wild-type and transformant strains of F. diplosiphon treated with nZVIs. This effect was corroborated by the increased percentages of monounsaturated fatty acids and the distinctive FAME profiles observed, particularly hexadecanoic acid. Transcriptome analysis further illuminated the regulatory shifts in gene expression associated with iron uptake, photosynthetic activity, and fatty acid metabolism, underscoring the complex interplay between metabolic pathways and nZVIs as external stressors. These findings not only deepen our understanding of the metabolic alterations in F. diplosiphon as a response to nZVI exposure, but also highlight the improved potential of the transformant strain for biofuel production.
Does Random Treatment Assignment Cause Harm to Research Participants?
74d7925d-bce0-4fb3-a283-f42c930813a2
1470665
Gynaecology[mh]
Despite widespread reliance on randomized clinical trials (RCTs), and claims that they represent the “gold standard” for assessing treatment efficacy, ethical concern has been raised about the impact of RCTs on participants [ – ]. Specifically, there is a perception that individual patients are likely to have better outcomes when treatment decisions are based on physicians' clinical judgment, rather than random assignment . It has been claimed that by foregoing individualized treatment assignment, the process of choosing research participants' treatments by random assignment leads to an “inevitable compromise of personal care in the service of obtaining valid research results” . Further, physician and patient concerns about random treatment assignment are among the most frequently cited reasons for refusal to enroll in RCTs [ – ]. While some commentators focus on the specific impact of random treatment assignment, others have investigated the broader topic of differences in clinical outcomes between research participants and “real” patients in the community setting. Some studies have suggested that research participation may be associated with improved clinical outcomes [ – ]. These data have led some to recommend trial participation as a means to better treatment . For instance, the National Comprehensive Cancer Network's clinical practice guidelines in oncology state that “the best management for any cancer patient is in a clinical trial” . Yet these conclusions are not based on strong evidence . In particular, comparisons of research participants versus non-participants often include non-participants who do not meet trial eligibility criteria . Because of stringent eligibility criteria, trial participants tend to be younger and healthier than non-participants in the community [ – ]. Trial participation may also be the only means of access to some therapies: if the investigational therapy is available only in the research setting and turns out to be superior to existing therapies, trial participants who were allocated to the newer agent would be more likely to benefit. Further, the supportive clinical care that participants receive as part of research in resource-rich settings associated with some clinical trials may also be associated with superior outcomes. Recognizing these flaws in the existing data, a recent review of the literature called for more studies that assess the impact of participation in clinical research on patient outcomes in a methodologically rigorous manner . It is particularly timely to disentangle the issues surrounding the effect of research participation on patients. There has recently been increased emphasis on designing trials that compare commercially available, clinically relevant alternatives [ – ]. Some authors have advocated substantial increases in funding of “pragmatic” trials enrolling large numbers of patients in community practice settings . Additionally, Medicare's policy has recently been modified so as to provide reimbursement for some new therapies only if patients receive them in the setting of a clinical trial . Unlike in the previous paradigm, which viewed randomized trials as a tool to evaluate the efficacy of novel therapeutic agents, these innovations will likely result in many more patients encountering the decision of trial enrollment in the setting of routine clinical care. 
Prospective participants who are asked to participate in these pragmatic trials will have to decide whether to receive therapy that was selected via randomization or to select treatment with the input of their clinician. Given the increased emphasis on recruiting large numbers of patients into trials, it is important to consider the question of enrollment from the perspective of patients who meet all eligibility criteria and are asked to enroll. If they agree to have their treatment selected at random, rather than by their clinicians or themselves, will they be more likely to experience adverse outcomes? We sought to answer this question by examining the potential risks associated with random treatment allocation, rather than delineating differences between trial participants and non-participants . While numerous studies have demonstrated the differences between trial participants and their counterparts in the community, few have focused specifically on the impact of random treatment assignment. Specifically, we were interested in the group of patients who were eligible for participation in an RCT but could also receive either of the therapies offered in the RCT even if they refused to enroll. We conducted a systematic review of published randomized controlled trials to compare the clinical outcomes of randomized patients and nonrandomized patients who were eligible for the same trial, were cared for in the same clinical setting, and received the same agents available to trial participants. Selection of Studies We conducted a Medline search to identify studies that (1) included only patients who were eligible for trial participation, (2) included only patients who were cared for at the same institutions and at the same time in which the randomized trial was recruiting, (3) allowed non-participants access to the agents used in the trial, (4) provided outcome data for both trial participants and eligible non-participants, and (5) recruited all participants in a similar manner. The Medline search employed 23 unique combinations of terms and strings of terms (see ). We focused a significant portion of our Medline search on identifying studies that met our definition of comprehensive cohort study design (see for terms and phrases). The comprehensive cohort study design, also called the partially randomized patient preference trial design, offers eligible research participants the chance to refuse randomization but receive either the study intervention or the control intervention per study protocol . In addition, we used the references of relevant manuscripts, authors' own bibliographic libraries, and Web of Science to identify frequently cited researchers and papers. The Medline search identified 1,505 studies; the Web of Science search identified 371 studies. Of these 1,876 studies, the titles of 1,555 were identified by the two reviewers as potentially appropriate for inclusion in the current analysis. The abstracts of these 1,555 studies were assessed by two authors for appropriate content and relevant methodology. The full texts of 48 potentially suitable manuscripts were retrieved and assessed. Of these, 25 studies met the eligibility criteria. Data Analysis An explicit abstraction instrument was used to obtain baseline characteristics of the RCT participants and eligible non-participants and primary clinical outcomes. Outcomes were restricted to the primary outcome listed in each manuscript; if more than one primary outcome was specified, the first one listed was used. 
To compare outcomes across studies, all study outcomes were standardized to “adverse” outcomes, e.g., for studies that reported survival, we converted probability of survival to probability of death. Most of the studies had dichotomous outcomes that enabled the calculation of odds ratios; those that did not were analyzed separately. In the two studies in which outcomes were expressed only as rates rather than as frequency counts, the stated proportion of people in each group who experienced the study outcome was multiplied by the number at baseline to estimate the frequency . In one study, non-participants were able to select from three treatment options, only two of which were part of the RCT. For this study, we included data only from non-participants who received one of the two treatments that were part of the RCT . Because the relation between trial participation and clinical outcomes might be confounded by differences in baseline health status, we categorized the studies into three mutually exclusive groups: those in which the RCT participants were, overall, less healthy than eligible non-participants at baseline, those in which there was no clear difference in baseline health status, and those in which RCT participants were, overall, healthier at baseline. Two clinicians, using an implicit schema involving examination of baseline clinical and demographic characteristics of randomized and nonrandomized patients, independently categorized each study according to whether there was a balance of important prognostic factors between groups. Disagreements were resolved by consensus. The odds ratios of experiencing the primary clinical outcome for RCT participants versus eligible non-participants were calculated using SAS 8.1 . A Breslow–Day chi-square statistic indicated that it would be inappropriate to aggregate the results of studies with dichotomous outcomes because of heterogeneity. Thus, the outcomes are presented simply by study, according to baseline differences.
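A brief illustration of the two computational steps described above, converting reported rates back to frequency counts and then computing an odds ratio for randomized versus nonrandomized patients, is sketched below in R rather than the SAS 8.1 used in the original analysis. The group sizes and rates are hypothetical and serve only to show the arithmetic.

```r
# Hypothetical study reporting adverse-outcome rates rather than counts
n_rand <- 200;  rate_rand <- 0.20      # randomized patients
n_non  <- 180;  rate_non  <- 0.15      # eligible non-participants

# Step 1: recover approximate frequency counts from the reported rates
ev_rand <- round(rate_rand * n_rand);  no_rand <- n_rand - ev_rand
ev_non  <- round(rate_non  * n_non);   no_non  <- n_non  - ev_non

# Step 2: odds ratio (randomized vs. nonrandomized) with a Wald 95% CI
or <- (ev_rand * no_non) / (no_rand * ev_non)
se <- sqrt(1/ev_rand + 1/no_rand + 1/ev_non + 1/no_non)
ci <- exp(log(or) + c(-1, 1) * qnorm(0.975) * se)
round(c(OR = or, lower_95 = ci[1], upper_95 = ci[2]), 2)
```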
A total of 25 articles met the inclusion criteria and were selected for data abstraction. The dates of publication ranged from 1984 to 2002; the majority (80%) were published in 1990 or later. There was a broad range of conditions under investigation, and types of studies, including surgical trials, drug trials, and trials of counseling. The most common specialties represented were oncology (six studies), cardiovascular disease (five studies), and obstetrics/gynecology (five studies). The total number of eligible patients across all studies was 17,934 (range: 79 to 3,610); the proportion of eligible patients who agreed to be randomized ranged from 29% to 89% (average: 45%; median: 47%). The primary outcomes of interest varied across studies; the most common were mortality (9/25), acceptability of treatment (5/25), and proportion of time or number of days with a given condition (2/25). Baseline Characteristics shows the study intervention and enrollment data for all 25 studies, categorized according to baseline clinical and sociodemographic characteristics.
There were no clear differences in baseline health status between RCT participants and eligible non-participants in 17 studies. In one study, RCT participants were healthier than eligible non-participants at baseline, and in seven studies RCT participants were less healthy at baseline than eligible non-participants. There was no significant relation between the proportion of eligible patients who agreed to be randomized and the occurrence of differences in baseline health status of randomized versus nonrandomized patients. The mean proportion of eligible patients who agreed to be randomized in the seven studies categorized as “RCT patients less healthy” was 48.9%, while the mean in the 17 studies with no baseline differences was 43.5% ( p = 0.61). Differences in clinical sociodemographic characteristics between groups also varied in magnitude and significance. For instance, in the Bypass Angioplasty Revascularization Investigation of angioplasty versus coronary artery bypass graft, RCT participants were significantly more likely than non-participants to have a history of myocardial infarction (55% versus 51%), heart failure (9% versus 5%), or diabetes (19% versus 17%) . Significant differences in race were found in two studies: the study by Marcus and colleagues included more non-whites in the eligible, nonrandomized group (10% versus 24%; p = 0.008), and the Bypass Angioplasty Revascularization Investigation included more non-whites in the RCT group (10% versus 6%, p < 0.001) [ , , ]. Outcomes In 22 of the 25 studies (88%), there were no significant differences in clinical outcomes between patients whose treatment was selected by randomized allocation and those whose treatment was selected on the basis of clinical judgment and/or patient preferences ( ; ). There were no significant differences in clinical outcomes between randomized and nonrandomized patients in 15 of the 17 studies (88%) in which there were no clear baseline differences in health or sociodemographic status. Similarly, there were no significant differences in clinical outcomes between randomized and nonrandomized patients in six of the seven studies in which RCT participants were sicker than non-participants at baseline (86%; chi-square test, p > 0.05 for comparison with the “no clear baseline differences” group). In Feit et al.'s analysis of the data from the Bypass Angioplasty Revascularization Investigation , randomized patients were more likely to have risk factors for adverse outcomes at baseline: they were more likely to have congestive heart failure, prior myocardial infarction, or diabetes, and were more likely to be non-white and less educated. The 7-y mortality in the randomized group was 17.3%, compared with 14.5% in the nonrandomized group (relative risk: 1.19; 95% confidence interval [CI]: 1.03, 1.39) . In Melchart et al.'s study of acupuncture versus midazolam as pretreatment for gastroscopy , there were no significant differences in baseline health status between randomized and nonrandomized groups. Randomized patients were more likely than nonrandomized patients to state that they would not undergo the same treatment again (34.6% versus 15.3%; relative risk: 2.27; 95% CI: 1.06, 4.84). 
Similarly, in Blichert-Toft's study of mastectomy versus breast-conserving surgery for breast cancer , randomized patients were more likely than nonrandomized patients to experience the outcome of cancer recurrence (13.7% versus 6.6%), although the difference was of borderline significance (relative risk: 2.08; 95% CI: 1.07, 4.02). In the single study in which randomized patients were categorized as having a better baseline health status than nonrandomized patients, there was a nonsignificant trend towards the randomized patients being less likely to experience disease recurrence or death (odds ratio for randomized versus nonrandomized: 0.35; 95% CI: 0.12, 1.01) .
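Because the Breslow–Day statistic mentioned in the data analysis indicated heterogeneity, the study-level results above are reported separately rather than pooled. The sketch below shows one common way to check such heterogeneity, Cochran's Q computed on study-level log odds ratios, using made-up 2x2 counts; it is an illustrative approximation, not the exact Breslow–Day procedure used in the original analysis.

```r
# Hypothetical 2x2 tables: events and non-events in randomized (ev_r, no_r)
# and nonrandomized (ev_n, no_n) groups for three studies
studies <- data.frame(
  ev_r = c(30, 12, 45), no_r = c(170, 88, 255),
  ev_n = c(22, 25,  9), no_n = c(158, 75, 291)
)

log_or <- with(studies, log((ev_r * no_n) / (no_r * ev_n)))   # per-study log odds ratio
var_lo <- with(studies, 1/ev_r + 1/no_r + 1/ev_n + 1/no_n)    # Woolf variance
w      <- 1 / var_lo

pooled <- sum(w * log_or) / sum(w)             # fixed-effect pooled log OR
Q      <- sum(w * (log_or - pooled)^2)         # Cochran's Q statistic
p_het  <- pchisq(Q, df = nrow(studies) - 1, lower.tail = FALSE)
round(c(Q = Q, p_heterogeneity = p_het), 3)
```

A small heterogeneity p-value argues against reporting a single pooled odds ratio, which is why the outcomes here are presented study by study.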
When there are several treatment options available, and there is uncertainty about which one is superior, it is assumed that individualized treatment assignment—in which clinicians consider the health status and preferences of each patient and incorporate them into a recommendation—is more likely to yield desirable outcomes. This is why doctors don't flip coins, and this is also why some may assume that randomization as part of a trial is harmful. In 23 of the 25 published clinical trials that met inclusion criteria, there were no significant differences in the likelihood of experiencing the primary study outcomes between patients whose treatment was determined by random allocation versus those whose treatment was selected on the basis of clinical judgment and/or patient preferences. More importantly, in 15 of the 17 studies in which randomized and nonrandomized patients were classified as having similar health status at baseline, there were no significant differences between these groups in clinical outcomes. These data contradict the perception that random treatment assignment as part of a clinical trial is harmful to research participants. The finding that randomized research participants and non-participants tend to achieve similar clinical outcomes also contradicts prior studies suggesting that trial participation may be associated with superior clinical outcomes [ – ]. Many of the previous studies that reported such a difference failed to account for the numerous differences between clinical care and clinical research that may influence patient outcomes, including the fact that research participants are often younger, healthier, and treated by clinicians with more experience in treating patients with the condition of interest. Specifically, we restricted the present analysis to studies that included only patients who were eligible for RCT participation and had access to similar treatments whether or not they chose to enroll in the RCT. Hence, while our study sample was therefore restricted to a relatively small subset of RCTs, our findings suggest that the purported benefit of trial participation is probably due to baseline differences between participants and non-participants, or to differences in treatments received.
All of the studies included in the present analysis allowed access to the experimental therapies to patients who refused trial enrollment. It is unclear whether our results can be generalized to randomized trials that include newer, and potentially more efficacious, therapies that are not available outside the research setting. However, a recent analysis found that only 36% of trials presented at an annual meeting of the American Society of Clinical Oncology yielded “positive” results . These findings contradict the widespread assumption that access to experimental therapies is beneficial [ – ]. Future work should explore whether participation in randomized trials of otherwise unavailable agents is associated with superior clinical outcomes. While our comprehensive and systematic search identified far more manuscripts than prior reviews of this topic that we are aware of, our final sample size is small relative to the number of RCTs conducted annually . As a result, although our findings were consistent across disease entities and different types of intervention, they may not be generalizable. As noted in prior reviews, many of the primary studies did not control for differences in baseline health characteristics . We used an implicit, dual review approach to account for this potential bias, stratifying manuscripts according to baseline differences between trial participants and non-participants. Ideally, future work employing primary data would enable multivariate analysis of patient-level information, to account for important patient characteristics that may affect patient outcomes. The increasing use of electronic medical records represents a tremendous opportunity for establishing longitudinal registry databases to facilitate follow-up of patients who are offered trial enrollment, yet decline. Our results should be interpreted with several considerations in mind. We restricted our analysis to the primary outcomes assessed in the included studies. In particular, many studies assessed the outcome of mortality, and there may have been differences in the probability of other adverse events, satisfaction, or quality of life between RCT participants and non-participants. Similarly, clinical trials may include additional research procedures, such as blood draws and lumbar punctures that do not affect patient outcomes but that pose burdens to participants. Additionally, random assignment refers only to the investigational agent. Even among RCT participants, clinician-investigators generally have some latitude regarding other aspects of care that are administered to their patients and can therefore provide individualized care that consists of interventions that are distinct from the investigational agent. Similarly, clinicians may halt existing treatment for patients who are offered a choice of enrolling in a study. In these instances, if a patient is provided one of the treatment interventions offered in the study—whether selected with randomization or by patient choice—it is possible that the initial treatment may have been superior to either of the treatments under investigation. Further, publication bias might have yielded underestimates of differences between RCT participants and eligible non-participants, as investigators may have been reluctant to report data from the non-participants in their registries if they did not support the generalizability of their RCTs. 
Finally, there may have been important differences in health status between randomized and nonrandomized patients that were not reported by the investigators. However, given that the vast majority of the studies in our sample found no difference in health outcomes between groups, one would have to invoke a systematic over- or underestimation of health status in the randomized groups across multiple studies in order to introduce bias into this synthesis.
Numerous studies indicate that RCT participants often fail to understand that their treatments will be determined by random assignment [ , – ]. For example, a recent analysis found that half of parents who decided whether to enroll their children in a leukemia trial did not understand that treatment allocation would be determined by chance. The failure to understand randomization is often regarded as part of a broader phenomenon, termed the “therapeutic misconception,” according to which individuals assume that research treatments are based on physicians' decisions regarding what is best for them. In this context, our findings have important implications for the informed consent process. In addition to explaining randomization, investigators should also explain that, in general, there is little evidence that participating in randomized trials is either helpful or harmful.
What do our findings say about the impact of clinical judgment and patient preferences on clinical outcomes? Although clinicians and patients may be reluctant to forego clinical decision-making, our data suggest that undergoing randomization, rather than receiving individualized treatment recommendations from clinicians, is not harmful. This conclusion calls into question clinicians' ability to determine which therapy is superior for their patients in the setting of clinical equipoise, i.e., when there is uncertainty in the expert community about which treatment is superior for patients in general. It has also been suggested that some patients who are not randomly assigned to a treatment may achieve a better outcome not because of an objective therapeutic effect, but because they were assigned to the treatment arm they preferred—a logical extension of the placebo effect. To account for this possible “preference effect,” some have called for incorporating patient treatment preferences into the analysis phase of RCTs. Our data provide preliminary evidence that this preference effect does not bias the outcomes of RCTs: patients who received a treatment preferred by themselves or their clinicians did not experience superior outcomes. These findings are consistent with the result of a recent review in which the authors stratified patients according to treatment received and then compared the outcomes of patients who were randomized versus those who selected each therapy.
A critical barrier to enrolling patients in research studies is the fact that many patients are not even asked to participate. One reason physicians hesitate to recruit their own patients is their reluctance to forego individualized treatment decisions for their patients. This reluctance is especially important because physician recommendations are among the strongest predictors of trial enrollment. The current findings suggest that in the setting of clinical equipoise, randomized treatment allocation as part of an RCT is unlikely to be harmful. This does not imply that research is without risk, as the risks and benefits of experimental treatment may vary substantially between studies.
However, in the situation in which patients will have access to the treatments that are used in the study setting regardless of whether the patient enrolls, prospective participants and their referring physicians should be reassured: there is no evidence that random treatment assignment leads to worse clinical outcomes. Furthermore, patients who do participate in such research can contribute to the important objective of improving health and well-being for all patients.
Protocol S1. Literature Search Keywords and Results (67 KB DOC)
Editors' Summary
Background. When researchers test a new treatment, they give it to a group of patients. If the test is to be fair and provide useful results, there should also be a control group of patients who are studied in parallel. The patients in the control group receive either a different treatment, a pretend treatment (“a placebo”), or no treatment at all. But how do researchers decide who should be in the treatment group and who should be in the control group? This is an important question because the test would not be fair if, for example, all the individuals in the treatment group were elderly men and the controls were all young women, or if everyone in the treatment group received their treatment in a well-equipped specialist hospital and the controls received care in a local general hospital. Statisticians would say that the results from such studies were “confounded” by the differences between the two groups. Instead, patients should be allocated to treatment or control groups at random. Randomization also has the advantage that it can conceal from the researchers, and from the patients, whether the treatment being given is the new one or an old one or a placebo. This is important because—again for example—researchers might hold strong beliefs about the effectiveness of a new treatment and this bias in its favor might lead them, perhaps only subconsciously, to allocate younger, stronger patients to the treatment group. For these and other reasons, randomized clinical trials (RCTs) are regarded as the “gold standard” in assessing the effectiveness of treatments.
Why Was This Study Done? Doctors normally decide on the “best” treatment for an individual patient based on their knowledge and experience. However, if a patient has agreed to be part of an RCT, then their treatment will instead be chosen at random. Some people worry that patients who participate in RCTs may, because their treatment is less “personalized,” have a lower chance of recovery from their illness than similar patients who are not in trials. In contrast, others argue that, particularly if the trial is part of an important research program, being in an RCT is to the patient's advantage. This study aimed to find out whether either of these possibilities is true.
What Did the Researchers Do and Find? The researchers conducted a thorough electronic search of medical journals in order to find published RCTs for which information—both before and after treatment—had been recorded not only about the patients who were enrolled in the trials, but also about other patients whose condition made them eligible to participate but who were not actually enrolled. The researchers also decided in advance that they were only interested in such RCTs if the non-enrolled patients had access to the same treatment or treatments that were given to the trial participants. Only 25 RCTs were found that met these requirements.
There were nearly 18,000 patients in these studies; overall 45% had received treatment after randomization and 55% had not been randomized. Most of the RCTs were for treatments for cancer, problems of the heart and circulation, and obstetric and gynecological issues. The “clinical outcomes” recorded in the trials varied and included, for example, death/survival, recurrence of cancer, and improvement of hearing. In 22 of these trials, there were no statistically significant differences in clinical outcomes between patients who received random assignment of treatment (i.e., the RCT participants) and those who received individualized treatment assignment (eligible non-participants). In one trial the randomized patients fared better, and in the remaining two the nonrandomized patients had the better outcomes.
What Do These Findings Mean? These findings suggest that randomized treatment assignment as part of a clinical trial does not harm research participants, nor does there appear to be an advantage to being randomized in a trial.
Additional Information. Please access these Web sites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.0030188 .
• The James Lind Library has been created to help patients and researchers understand fair tests of treatments in health care by illustrating how fair tests have developed over the centuries
• Wikipedia, a free Internet encyclopedia that anyone can edit, has pages on RCTs
Medical society engagement in contentious policy reform: the Ethiopian Society for Obstetricians and Gynecologists (ESOG) and Ethiopia’s 2005 reform of its Penal Code on abortion
What underlies the willingness of a professional medical society to actively support policy reform, particularly socially contentious reform? In 2005, Ethiopia enacted a broad reform of its Penal Code that liberalized its law on abortion. Second only to South Africa and Mozambique, Ethiopia now arguably has, in practice, one of the more liberal laws on abortion in sub-Saharan Africa (SSA) ( ). The Ethiopian Society of Obstetricians–Gynecologists (ESOG) publicly supported this reform. Rather than avoiding policy engagement, ESOG leadership researched and articulated the problem of abortion-related maternal mortality; conveyed it to the elite public and to government policy makers; proposed policies; and later helped develop regulations that further expanded access by making new categories of health professionals eligible to provide services ( ). ESOG leadership’s active public role in support of reform runs counter to theoretical predictions and empirical evidence that professional societies only actively engage in policy making when it relates to their professional self-interest or when it does not threaten to generate conflict among members ( ; ; ). This study identifies three sets of explanations for ESOG’s surprising engagement in controversial policymaking. First, both individual obstetrician–gynaecologists (ob–gyns) and ESOG as an organization had strong, almost definitional commitments to maternal mortality prevention. Second, typical organizational constraints on medical society involvement in controversial policymaking were absent or relaxed at the time of the reform. Third, the political environment was conducive to ESOG policy engagement: governmental support, a propitious political moment, and NGO ally encouragement.
Background
Ethiopia’s Penal Code reform took place in the context of dauntingly high levels of maternal mortality: in 2005, 676 women died for every 100 000 live births, and unsafe abortion had long been one of the three leading causes of these deaths ( ; ; ). However, Ethiopia’s cultural context would not lead one to expect the liberalization of abortion law. Major religious groups (Ethiopian Orthodox Christians, Muslims, Protestants) proscribe abortion except to save the life of the woman, and opposition to abortion among Ethiopians is also strong, as is common in non-industrialized, less secular countries ( ). Over 63% of the Ethiopian population views abortion as ‘never justifiable’ ( ). However, as is typical globally, women continue to obtain abortions despite social censure and even at times their own misgivings (Dixon-Mueller 1995).
ESOG and ob–gyns in Ethiopia
Ethiopian ob–gyns are a small and nascent group: fewer than 250 ob–gyns served between the 1950s and 2007 in Ethiopia, a country now of almost 100 million, and the first graduate training program was only established in 1980 ( ). Prior to this, Ethiopians received ob–gyn training on an ad hoc basis from foreign ob–gyns inside the country or travelled abroad for training. Between 1974 and 1990, many Ethiopians went to Eastern European countries for graduate medical education ( ). Thus their training was in countries where abortion services are integrated into the national health systems, and where a Marxist, and more receptive, perspective on abortion and contraception prevailed ( ). The ob–gyn professional society—ESOG—is also recent, founded in 1992 with an explicit mission to address Ethiopia’s elevated maternal mortality ( ; ).
Although ESOG grew from 75 founding members to 234 in 2012 ( ), representing the 176 ob–gyns in practice in the country in 2007 ( ), it remains a small organization with leadership historically serving on a volunteer basis. Globally, as well-educated, more affluent, socially connected, and most often male health professionals, ob–gyns have significant social capital, and as such have been influential contributors to national reproductive health policy ( ; ; ). Ethiopia is no exception. Ethiopian ob–gyns play at least a triple role: they provide medical care, train other providers and contribute to development of government guidelines and policies. They are the providers who address the most difficult cases, including those of women with severe complications of unsafe abortion who make it to medical facilities. They have high clinical autonomy due not only to their medical training, but also to national shortages of medical professionals ( ). As is the case in most SSA countries, the Ministry of Health relies on ob–gyns for technical advice and policy input.
Reform of national laws on abortion in Ethiopia and SSA
Liberalization of national abortion law, with its promise of access to legal and safer services, remains an infrequent event. Nonetheless, the global trend has been one of liberalization, even in SSA where laws are largely restrictive inheritances from colonial rulers ( ; ). However, reform of abortion law in SSA has been largely on limited grounds (to permit abortion to save the life of the woman, in cases of rape or incest, or to preserve the physical health of the woman). The region has the highest proportion of countries permitting abortion only to save the life of the woman ( ; ). Only South Africa and Mozambique (as of 1996 and 2014, respectively) allow women access to abortion in the first trimester without legal restrictions. Prior to Ethiopia’s 2005 reform, abortion was permitted only to ‘save the pregnant woman from grave and permanent danger to life or health’, and required approval by two doctors ( ). The final version of the revised 2005 Penal Code was not the complete decriminalization of abortion law initially proposed in Parliament, due to late-breaking public opposition from the Patriarch of the Ethiopian Orthodox Church ( ). Nonetheless, the revised Penal Code did include a lengthy list of exceptions, permitting abortion in the cases of rape, incest or foetal impairment; if pregnancy continuation or birth would endanger the health or life of the woman or foetus; if the woman had physical or mental disabilities; or if the woman was a minor physically or mentally unprepared for childbirth. Further, ‘mere statement from the woman’ was deemed sufficient proof of eligibility ( ), and regulations have further simplified access ( , ).
Theory and empirical evidence
On balance, both theory and historical experience suggest that medical societies would avoid involvement in socially contentious policy reform, in order to avoid conflict as well as to protect professional and economic interests and standing. Theory on medical society behaviour with regard to sensitive policymaking would predict disengagement. Sociological study of the professions describes physicians as having a monopoly on a technical body of knowledge that then accords them professional autonomy, public respect and authority ( ).
The medical profession is seen as distinguished by its collective (rather than self-interested) orientation, and is expected to have a professional commitment to prioritize patient well-being over personal gain, thus creating trust and legitimizing physician authority ( , ). However, medical societies’ incentives are understood to be different from those of individual physicians, with societies having an incentive to avoid engaging in public policymaking if this risks provoking conflict for members ( , ). More recent theoretical research on the role of the medical professions with respect to policy, largely in more affluent countries, has focused less on physicians’ collective social contract orientation, and has predicted rather that organized medicine will pursue professional self-interest through public policy ( ; ; ). Professional self-interest is defined as providers’ economic interests and clinical autonomy, including control over how, when, and if services are delivered, and as a consequence, control over patients. Accordingly, theories of the professions would posit that a medical society would both resist expansion of the scope of practice of other medical professions (e.g. midwives, health officers or nurses) and try to retain authority over whether and when to provide abortion services to women ( ; ). Prior research ( ) also finds that organizational priorities and historical experience shape medical societies’ policy participation and focus during abortion law reform. While individual ob–gyns have been strong and effective voices in many national efforts to liberalize laws on abortion ( ; ; ; ), empirical research in both industrialized and low-income countries shows their medical societies not to have been at the forefront of socially contentious abortion law reform. Prior to the 1960s, physicians’ medical associations in Western countries often sought to curtail women’s legal access to abortion in order to defend or build physician professional power vis-à-vis other medical cadres ( ; ; ; ). During the wave of abortion law reforms that took place in OECD countries starting in the late 1960s, few medical societies either called publicly for legal reform or were lead reform actors; instead, many opposed reform as a challenge to physician power ( ; ; ). If medical associations entered into the policy debates, they did so after recognizing the inevitability of reform, and engaged to preserve physician discretion over services ( ; ). In Poland and Brazil, where public policy battles over abortion law have resurfaced, medical societies have largely remained on the sidelines ( ; ; ; ; ). Even in settings with high maternal mortality due to unsafe abortion, medical societies have largely been absent or more peripheral to reform efforts, such as in South Africa ( ; ), Guyana ( ) and Nigeria ( ), and/or have largely only contributed to regulatory guidance after reform, as in Nepal ( ). Similarly, none of the five West African ob–gyn societies participating in a FIGO initiative to prevent unsafe abortion included reform of their national laws and policies on abortion as part of their action plans ( ). In the few cases where ob–gyn societies have publicly supported reform early on, there has either already been clear government support for reform and/or the reform has left authority over the procedure in the hands of physicians, as in India ( ; ) and Uruguay ( ; ), or prominent ob–gyns have come under legal attack, as in Kenya ( ; ).
On balance, historical experience and the theoretical literature do not predict medical society engagement with or support for abortion law reform.
This retrospective case study examines the factors behind professional medical society support for abortion law liberalization. It used observational data (in-depth interviews) as well as ESOG and government documents, opinion data, and secondary research. The University of California, Berkeley Committee for Protection of Human Subjects provided ethical approval (Protocol ID: 2011-03010). The 54 people interviewed were a purposive sample of individuals familiar with the 2005 reform of the Penal Code in Ethiopia, selected using a combination of reputational and positional criteria ( ). Men were 60% of all informants; all but two informants were Ethiopian nationals. Interviewees included 10 Ethiopian ob–gyns, 6 of ESOG’s 10 past and current Presidents, and 5 of ESOG’s 9 Secretaries. All were men affiliated with maternity or university hospitals, Ethiopian and international NGOs, and/or the Ethiopian federal government. All ob–gyns contacted were interviewed ( ). All but two interviews were conducted by the author in English in 2012 (7 years post-reform), as all interviewees had tertiary-level education for which English is the medium of instruction. Two other interviews, of religious leaders, were conducted in Amharic and summarized in English ( ). All but 10 interviews were recorded and transcribed verbatim; unrecorded interviews were summarized shortly after completion. Interviews averaged 55 min in length, with a range from 20 min to 4 h. All interview transcripts/write-ups were then coded using HyperRESEARCH 3.5.2 qualitative data analysis software ( ).
They were first descriptively coded for professional affiliation and gender, then to identify actors, roles, interests and timing of key events. Quotations here are all from Ethiopian informants. Questions in the 2012 semi-structured interviews were on the reform sequence and actors; the framing of the debate; the reform roles and actions of the informant’s organization; and summative questions on perceived causes of reform. Phenomena of interest are ESOG’s position on liberalization of the Penal Code on abortion; its motivations, roles, and actions related to reform; the timing of key events; and the factors governing active medical society engagement in reform. As the argument here is that this Ethiopian case does not match empirical evidence or theoretical predictions related to medical society involvement in contentious policy reform, I first produced a chronology of the reform with special attention to the contributions and motivations of the ob–gyn society (ESOG), and then looked for factors that might mitigate forces discouraging ESOG policy participation predicted by theory and historical experience. Secondary and other data sources helped to validate statements made by interviewees and fill any gaps. If events were described in similar ways by multiple respondents from diverse types of organizations, all involved in the reform, saturation was deemed to have been reached. As with any retrospective research, this study has limitations, primarily recall and social desirability bias. The reform’s prominence may reduce recall bias. I tried to mitigate social desirability bias by interviewing a wide variety of informants; comparing individuals’ statements with what they might have an institutional or reputational incentive to say; inquiring about the roles of other actors; asking informants to critique their own assessments; and saving questions about impact for a later stage ( ; ). Further, bias resulting from respondents’ desire to claim credit for reform is likely counterbalanced by discomfort with the idea of ‘abortion upon request’ and desire to avoid broader public attention around abortion.
Review of the three sets of factors associated with ESOG policy engagement follows below, covering ESOG organizational and individual member commitments; organizational structure and experience; and the receptive political environment.
ESOG support for reform grounded in commitment to reducing maternal mortality
Ethiopia’s ob–gyns had deep concerns about maternal mortality as well as broad support for liberalization of the law on abortion. Before the reform, ESOG’s 2000 nationally representative survey of medical providers showed that 80% of ob–gyns viewed the existing (1957) law as overly restrictive, and that over three quarters believed that preventing the interruption of schooling was an acceptable rationale for legal abortion ( ). Interviews suggest that individual ob–gyns’ motivations sprang from their direct experience caring for women dying from unsafe abortions and their belief that they and their profession were the ones with the knowledge, skills, and experience to prevent these deaths.
The magnitude and the mortality, the mortality. And that it’s affecting mainly the young people. You see, the eyes of these young people remain in your eyes, when they are almost dying.
They see you, they say, “please save me.” This is almost a daily practice in our Gyne ward when we are residents, in the so-called ‘septic room.’ There are three or four people, going into septic shock, very eager to survive. But you can’t save them, because they came so late. And there were no proper antibiotics at that time. For an ob-gyn, it was the kind of thing that you start to hate the … We used to operate at least one patient a day, at least every day, with abortion complications.
Interview 12 (Ob-gyn)
Several ESOG informants described this experience of being unable to save young women with their lives ahead of them as motivating their support for legal reform. Ob–gyns knew of the impacts of unsafe abortion not only due to their work, but also as a result of their research on maternal mortality and its causes in Ethiopia, which was shared at ESOG and other meetings ( ; ). Ob–gyn interviewees spoke of their special responsibility, even moral obligation, to speak out for policy change, as those who best knew the scope and impacts of unsafe abortion.
These (obstetrician–gynaecologists) are the learned people […] They know the problem, they know much of the causes of the problem, and they know the solutions. […] So again, crossing their arms and sitting cannot be a way out, they have to do something, right? These people have a conscience. If they don’t do it for women who are suffering or dying …. There is this moral issue, right?
Interview 14 (Ob-gyn)
Ob–gyns also knew of safe alternatives to the unsafe abortions that sent injured women to the overflowing labour and delivery wards of the capital’s hospitals. Through the training of many senior ob–gyns in Soviet bloc countries, as well as contact with providers of safe abortion services in Addis Ababa, ESOG leaders knew this maternal mortality could be reduced through access to safe abortion services ( ). Since at least the early 1990s, a significant proportion of the services in the capital, Addis Ababa, had been safe abortion services, as a number of medical professionals, including those at the Dejazmach Balcha Russian Red Cross Hospital and at Marie Stopes International, Ethiopia, provided safe abortion services ( ). Further, much of ESOG’s leadership, including at least three past ESOG Presidents, had been key movers in the introduction of post-abortion care (PAC), a comprehensive set of clinical and health interventions to preserve the life and health of women presenting with complications of unsafe abortion. The introduction of PAC not only brought clinical skills to save lives (the same used for safe abortion care), but it offered an opportunity for providers to clarify and articulate their values and also their roles and professional responsibilities regarding unsafe abortion and abortion more generally ( ).
The introduction of post-abortion care (PAC) services here in Ethiopia […] has given us actually the competence to familiarize everybody with the topic and with the quality of care. So if that was not there, definitely the groundwork would have been missing, so it would have been difficult to jump from nowhere to policy formulation, no way!
Interview 33 (Ob-gyn)
Further, the way that ob–gyns had discussed the rationale for PAC—in terms of saving women’s lives—foreshadowed how ESOG leaders framed discussion later during the 2005 reform process.
ESOG founding rationale and mission
When discussing ESOG’s involvement in reform, all ob–gyn interviewees began by pointing to ESOG’s founding motivation and explicit mission of addressing Ethiopia’s elevated levels of maternal morbidity and mortality ( ; ). They viewed this mission, as well as the research and practice of ESOG’s founders, as directly consonant with support for reform of the country’s laws on abortion. They noted that ESOG’s founding was in part sparked by the 1987 inaugural Safe Motherhood conference. There was consensus among interviewees that ESOG had taken a clear position that reform was needed, and that internal opposition was absent or muted. Several referenced ESOG’s 2000 nationally representative provider survey as revealing the profession’s strong support for liberalization of the law ( ). ESOG’s President in 2001 articulated the position to the media that: ‘pregnancies which occur through rape, inefficacy of contraceptives as well as pregnancies that endanger the life of the mother should be given free access to safe abortion’ ( ). The one dissenting ob–gyn interviewed was more troubled by how ESOG focused on advocacy to the exclusion of professional development activities. A former President explained that while there might have been discomfort among some ESOG members about the reform or about ESOG’s public stance, they would not have voiced opposition explicitly or publicly due to loyalty to the profession.
Yes, ESOG has always been positive, I mean even if there are people who are opposed to [abortion reform …] probably they know that they are not many, or probably they don’t want as a person to be included, but as an institution, it is okay. I don’t know how they see it, but there has never been, or I have never witnessed, anybody who has raised his or her hand and said this is immoral, this is negative, this is … you know. I have never heard that in the meetings of ESOG that I have attended.
Interview 14 (Ob-gyn)
At the same time, ESOG’s leadership recognized the organization’s role as a professional association representing membership, rather than an advocacy group with a specific policy platform, and the constraints this placed on advocacy. They also noted that ESOG had neither the clout of a trade union nor the strength or inclination to have an adversarial relationship with government. They emphasized ESOG’s close relationship with government, and saw their role as serving as a resource for government.
This is just a professional society, it’s not like a labour union whereby it imposes a new policy. But we closely cooperate and collaborate with the Ministry of Health in terms of formulating policies or act as resource person for whatever activities the Federal Ministry of Health undertakes.
Interview 33 (Ob-gyn)
Reduced constraints on ESOG’s policy engagement
Interviews as well as review of ESOG documents and secondary data suggest that ESOG’s active role in reform is also explained by the newness of the organization and its streamlined structure, by a favourable political landscape, and by the high value reform allies placed on ESOG’s contributions.
ESOG’s organizational structure and stage of development
Several features of ESOG’s organizational structure gave leadership more latitude to engage in policymaking than might otherwise have been expected. It was at an early stage of organizational development, one where it was highly motivated by mission, but relatively lacking in systems and resources ( ).
While ESOG had management and governance systems, they remained relatively flexible, allowing volunteer leadership to pursue policy goals with fewer impediments. Further, as ESOG’s leadership worked on a volunteer basis, it tended to attract those motivated to work on topics and activities to which they were personally committed ( ). ESOG’s leaders during this period of reform included several who led the introduction of PAC to Ethiopia, and thus were more accustomed to dealing with the negative societal views on abortion. ESOG leadership had further latitude because of how responsibility for work was allocated. At ESOG’s annual meeting, the General Assembly typically delegated broad responsibility for project implementation during the upcoming year to ESOG’s President and Executive Board or, in this case, to the volunteers on the sub-committee working on reform. This allowed those working on a particular project or policy reform to operate with substantial autonomy. The limited formal contact between leaders and membership also reduced potential obstacles for ESOG leaders’ public contributions to reform. Further, ESOG’s organizational youth meant that it lacked precedents prescribing or proscribing particular types of policy activity. Previous ESOG policy work had not been on socially contentious issues such as abortion, nor had it been on high-profile legislative reform. Given the absence of such precedents, ESOG leaders had more room to contribute actively and publicly to policymaking. Finally, ESOG was also relatively less susceptible to donor pressure, as it had few external sources of financial support prior to 2006, when the final Ministry of Health regulation on abortion was issued ( ). Significantly, ESOG had no direct U.S. government funding until after the reform. During Ethiopia’s reform period, U.S. international reproductive health assistance was governed by the Mexico City policy (known also as the ‘Global Gag Rule’) barring non-U.S. organizations receiving U.S. assistance from providing abortion care services or referrals or speaking publicly on abortion legal reform, even if financed by other sources ( ; ).
There was the Gag Rule and so on at that time. Even ESOG didn’t have, didn’t get a lot of support in terms of projects at that time, because ESOG was considered as an institution that promoted abortion. At that time, we were having very much difficulty in getting projects and so on, research support. Because USAID and everybody considered it as an abortion organization.
Interview 12 (Ob-gyn)
By involving the organization in policy on abortion during a U.S. Republican administration (2000–08), ESOG’s leaders knew they were forgoing potential U.S. government funding. Thus ESOG’s policy work occurred counter to its financial interests.
Favourable political environment
Virtually all interviewees described a political environment favourable to reform. In particular, the government’s well-known ideological commitment to improving women’s status and enactment of progressive policies that were foundations for later reform, a moment of relative democratic opening for civil society advocacy, and the active support from other advocates, all reduced the risks of participating for ESOG. Ethiopia’s reform took place under the aegis of the current political regime, the Ethiopian People’s Revolutionary Democratic Front (EPRDF), a coalition of groups led by the Marxist Tigray People’s Liberation Front (TPLF).
The ruling party’s history of policies enacted to improve women’s status, of proud secularism ( ; ), and specific senior leadership guidance to advance liberalization of the law ( ; ), created a political environment friendly to reform.
The government has a good and strong position on promoting women’s rights, starting right from the bush. There were many women soldiers involved in the fighting.
Interview 44 (Ob-gyn)
The government came to power in 1991 with a record and a clear agenda to advance progressive policies related to women. Their swift adoption of progressive policies ( ), and particularly the new Constitution (1994), made it clear there would be a complete overhaul of the 1957 Penal Code, with an explicit eye for expanding the ‘democratic rights and freedoms’ of women, as well as reducing the ‘grave injuries and sufferings caused to women and children by reason of harmful traditional practices’ (Constitution, Article 35, Rights of Women). ESOG leaders and other reform supporters all alluded to the Constitution and allied policies as explaining why reform of the law on abortion later took place.
The Constitution was very specific on the importance of protecting and promoting women’s rights, and against harmful traditional practices. With a favourable Constitution, it is very likely that there would be reforms aligning policy with the Constitution.
Interview 22 (Ob-gyn)
However, although interviewees saw the TPLF’s progressive ideology and track record as promising for reform of existing laws related to women, including abortion, they noted that high-level political leaders did not want to take action without public demonstration of support. The government saw it as the responsibility of civil society to educate the public and galvanize support, and permitted civil society policy advocacy insofar as it was ‘harmonized’ with the government’s developmental agenda ( ). This approach to policy change grew from its experience as an insurgency movement in rural Tigray fostering social change ( ; ; ).
Ok, you bring it forward, he [the government] is not going to cook it and finish everything and give it to you. But you have to bring it up. You need initiators to take it up to the top of the ladder, to make sure it’ll be blessed by them. It’s progressive, they have been for it, and they were for it, and they encouraged the movement to take its course. They facilitated everything, took it up to the Parliament and had it passed through. Who else could do it for you, unless you have got a government that is very cooperative and progressive?
Interview 33 (Ob-gyn)
The government formally channelled civil society participation, including that of ESOG, through the Reproductive Health Working Group (RHWG). It convened the RHWG in 2002 to educate the public about maternal mortality and unsafe abortion, to mobilize support for reform, to offer technical assistance to government leaders, and later to develop the law’s regulations ( ). Members of the RHWG, including ESOG representatives, presented in public forums, on the radio, and at national and regional government hearings on the proposed Penal Code reforms. ESOG, and NGO leadership generally, saw a government expectation of reform, and of civil society contribution to this reform. Reform supporters also made use of the historic moment of relative democratic openness in Ethiopia’s political system prior to a 2005 crackdown ( ). Informants saw the period as one of openness and policy possibility.
Yes, that period was a very good time for civil society, for the blooming of civil society, for creation of new NGOs and everything. I think it was very conducive for bringing about new ideas and for supporting family planning and a lot of different issues. So in terms of the environment, not in terms of the issue, it was more conducive than what we have now. Because it has a lot of space.
Interview 25 (Women’s rights leader)
In addition, the decision to overhaul the entire Penal Code provided the structured opportunity for advocates to push for inclusion of reform of the law on abortion. As noted above, during the Penal Code reform, the national government left open space for NGOs to present their experience, research, and perspectives; and to inform policymakers and the elite public.
Support for ESOG contributions from other advocates
ESOG’s reform collaborators saw ESOG and obstetrician-gynaecologists as essential contributors to policy discussions—‘their absence would have been fatal’ (Interview 6, Ethiopian International NGO leader)—further smoothing the way for ESOG participation. Civil society support for reform sprang from two sources. The first was women’s organizations, most notably the Ethiopian Women Lawyers’ Association (EWLA), championing a broad set of reforms to improve women’s social and economic status ( ; ). The second was a network of reproductive health NGOs seeking to limit maternal mortality, particularly that due to unsafe abortion, and to expand women’s access to reproductive health services. ESOG’s involvement was with this second group. Reform allies ensured that ESOG leaders were prominent if not lead speakers at public events. While interviewees didn’t necessarily point to ESOG as the most proactive reform supporter, they uniformly attested to the high value of ESOG’s participation.
It’s true. We want them [obstetrician-gynaecologists] on the forefront because our people look up to them on these issues. And also, you know, one voice of an ob-gyn is worth a million of ours. And it was so important to make sure that we are working with them.
Interview 6 (NGO leader)
Other advocates emphasized the special value of ESOG and ob–gyns as messengers for reform. They brought credibility to their policy advocacy, particularly in the eyes of more traditional representatives in Parliament and the regional governments. One women’s rights advocate emphasized not only the complementarity of the strategies of women’s advocates and those of ESOG, but also the special value of testimony from male ob–gyns, particularly with more conservative people.
What really complemented our story is the involvement of the obstetrician-gynaecologists, because they are mostly men. There are women, but the majority of them are men. It is not westernized women shouting and saying ‘you have to do this and this’ on television and camera. It was these doctors, men, saying that on camera and in the meetings. […] So they brought the numbers; we brought the stories. Those complemented each other very well in Parliament, for dealing with religious groups, and for other reluctant people.
Interview 25 (Women’s rights group leader)
Most advocates saw ESOG member involvement as critical to legitimizing and detoxifying public discussion of unsafe abortion as a policy problem.
Before the reform, ESOG's 2000 nationally representative survey of medical providers showed that 80% of ob–gyns viewed the existing (1957) law as overly restrictive, and that over three quarters believed that preventing the interruption of schooling was an acceptable rationale for legal abortion ( ). Interviews suggest that individual ob–gyns' motivations sprang from their direct experience caring for women dying from unsafe abortions and their belief that they and their profession were the ones with the knowledge, skills, and experience to prevent these deaths. The magnitude and the mortality, the mortality. And that it's affecting mainly the young people. You see, the eyes of these young people remain in your eyes, when they are almost dying. They see you, they say, "please save me." This is almost a daily practice in our Gyne ward when we are residents, in the so-called 'septic room.' There are three or four people, going into septic shock, very eager to survive. But you can't save them, because they came so late. And there were no proper antibiotics at that time. For an ob-gyn, it was the kind of thing that you start to hate the … We used to operate at least one patient a day, at least every day, with abortion complications. Interview 12 (Ob-gyn) Several ESOG informants described this experience of being unable to save young women with their lives ahead of them as motivating their support for legal reform. Ob–gyns knew of the impacts of unsafe abortion not only due to their work, but also as a result of their research on maternal mortality and its causes in Ethiopia, which was shared at ESOG and other meetings ( ; ). Ob–gyn interviewees spoke of their special responsibility, even moral obligation, to speak out for policy change, as those who best knew the scope and impacts of unsafe abortion. These (obstetrician–gynaecologists) are the learned people […] They know the problem, they know much of the causes of the problem, and they know the solutions. […] So again, crossing their arms and sitting cannot be a way out, they have to do something, right? These people have a conscience. If they don't do it for women who are suffering or dying …. There is this moral issue, right? Interview 14 (Ob-gyn) Ob–gyns also knew of safe alternatives to the unsafe abortions that sent injured women to the overflowing labour and delivery wards of the capital's hospitals. Through the training of many senior ob–gyns in Soviet bloc countries, as well as contact with providers of safe abortion services in Addis Ababa, ESOG leaders knew this maternal mortality could be reduced through access to safe abortion services ( ). Since at least the early 1990s, a significant proportion of abortion services in the capital, Addis Ababa, had been safe, as a number of medical professionals, including those at the Dejazmach Balcha Russian Red Cross Hospital and at Marie Stopes International, Ethiopia, provided safe abortion services ( ). Further, much of ESOG's leadership, including at least three past ESOG Presidents, had been key movers in the introduction of post-abortion care (PAC), a comprehensive set of clinical and health interventions to preserve the life and health of women presenting with complications of unsafe abortion.
The introduction of PAC not only brought clinical skills to save lives (the same used for safe abortion care), but it offered an opportunity for providers to clarify and articulate their values and also their roles and professional responsibilities regarding unsafe abortion and abortion more generally ( ). The introduction of post-abortion care (PAC) services here in Ethiopia […] has given us actually the competence to familiarize everybody with the topic and with the quality of care. So if that was not there, definitely the groundwork would have been missing, so it would have been difficult to jump from nowhere to policy formulation, no way! Interview 33 (Ob-gyn) Further, the way that ob–gyns had discussed the rationale for PAC—in terms of saving women's lives—foreshadowed how ESOG leaders framed discussion later during the 2005 reform process. ESOG founding rationale and mission When discussing ESOG's involvement in reform, all ob–gyn interviewees began by pointing to ESOG's founding motivation and explicit mission of addressing Ethiopia's elevated levels of maternal morbidity and mortality ( ; ). They viewed this mission, as well as the research and practice of ESOG's founders, as directly consonant with support for reform of the country's laws on abortion. They noted that ESOG's founding was in part sparked by the 1987 inaugural Safe Motherhood conference. There was consensus among interviewees that ESOG had taken a clear position that reform was needed, and that internal opposition was absent or muted. Several referenced ESOG's 2000 nationally representative provider survey as revealing the profession's strong support for liberalization of the law ( ). ESOG's President in 2001 articulated the position to the media that: 'pregnancies which occur through rape, inefficacy of contraceptives as well as pregnancies that endanger the life of the mother should be given free access to safe abortion' ( ). The one dissenting ob–gyn interviewed was more troubled by how ESOG focused on advocacy to the exclusion of professional development activities. A former President explained that while there might have been discomfort among some ESOG members about the reform or about ESOG's public stance, they would not have voiced opposition explicitly or publicly due to loyalty to the profession. Yes, ESOG has always been positive, I mean even if there are people who are opposed to [abortion reform …] probably they know that they are not many, or probably they don't want as a person to be included, but as an institution, it is okay. I don't know how they see it, but there has never been, or I have never witnessed, anybody who has raised his or her hand and said this is immoral, this is negative, this is … you know. I have never heard that in the meetings of ESOG that I have attended. Interview 14 (Ob-gyn) At the same time, ESOG's leadership recognized the organization's role as a professional association representing membership, rather than an advocacy group with a specific policy platform, and the constraints this placed on advocacy. They also noted that ESOG had neither the clout of a trade union nor the strength or inclination to have an adversarial relationship with government. They emphasized ESOG's close relationship with government, and saw their role as serving as a resource for government. This is just a professional society, it's not like a labour union whereby it imposes a new policy.
But we closely cooperate and collaborate with the Ministry of Health in terms of formulating policies or act as resource person for whatever activities the Federal Ministry of Health undertakes. Interview 33 (Ob-gyn) Interviews as well as review of ESOG documents and secondary data suggest that ESOG's active role in reform is also explained by the newness of the organization and its streamlined structure, by a favourable political landscape, and by the high value reform allies placed on ESOG's contributions. ESOG's organizational structure and stage of development Several features of ESOG's organizational structure gave leadership more latitude to engage in policymaking than might otherwise have been expected.
It was at an early stage of organizational development, one where it was highly motivated by mission, but relatively lacking in systems and resources ( ). While ESOG had management and governance systems, they remained relatively flexible, allowing volunteer leadership to pursue policy goals with fewer impediments. Further, as ESOG’s leadership worked on a volunteer basis, it tended to attract those motivated to work on topics and activities to which they were personally committed ( ). ESOG’s leaders during this period of reform included several who led the introduction of PAC to Ethiopia, and thus were more accustomed to dealing with the negative societal views on abortion. ESOG leadership had further latitude because of how responsibility for work was allocated. At ESOG’s annual meeting, the General Assembly typically delegated broad responsibility for project implementation during the upcoming year to ESOG’s President and Executive Board or, in this case, to the volunteers on the sub-committee working on reform. This allowed those working on a particular project or policy reform to operate with substantial autonomy. The limited formal contact between leaders and membership also reduced potential obstacles for ESOG leaders’ public contributions to reform. Further, ESOG’s organizational youth meant that it lacked precedents prescribing or proscribing particular types of policy activity. Previous ESOG policy work had not been on socially contentious issues such as abortion, nor had it been on high profile legislative reform. Given the absence of such precedents, ESOG leaders had more room to contribute actively and publicly to policymaking. Finally, ESOG was also relatively less susceptible to donor pressure, as it had few external sources of financial support prior to 2006 when the final Ministry of Health regulation on abortion was issued ( ). Significantly, ESOG had no direct U.S. government funding until after the reform. During Ethiopia’s reform period, U.S. international reproductive health assistance was governed by the Mexico City policy (known also as the ‘Global Gag Rule’) barring non-U.S. organizations receiving U.S. assistance from providing abortion care services or referrals or speaking publicly on abortion legal reform, even if financed by other sources ( ; ). There was the Gag Rule and so on at that time. Even ESOG didn’t have, didn’t get a lot of support in terms of projects at that time, because ESOG was considered as an institution that promoted abortion. At that time, we were having very much difficulty in getting projects and so on, research support. Because USAID and everybody considered it as an abortion organization. Interview 12 (Ob-gyn) By involving the organization in policy on abortion during a U.S. Republican administration (2000–08), ESOG’s leaders knew they were forgoing potential U.S. government funding. Thus ESOG’s policy work occurred counter to its financial interests. Favourable political environment Virtually all interviewees described a political environment favourable to reform. In particular, the government’s well-known ideological commitment to improving women’s status and enactment of progressive policies that were foundations for later reform, a moment of relative democratic opening for civil society advocacy, and the active support from other advocates, all reduced the risks of participating for ESOG. 
This study documents what underpinned the active involvement of a medical society in supporting the 2005 reform of Ethiopia’s Penal Code with respect to abortion, counter to theoretical and empirical expectations of self-interested behaviour. Drawing on their own and others’ research to highlight evidence of the problem and the need for change, ESOG communicated with policy makers and the informed public. They opened and legitimized public discussion of maternal mortality and unsafe abortion by framing it as an issue of saving lives. In their own view, as well as that of other reform supporters, the high social status of the well educated and mostly male ob–gyns, as well as how they talked about abortion, made them more palatable messengers on the sensitive issues of maternal mortality, unsafe abortion, and legal reform, particularly for older and more traditional leaders. Although this research cannot provide a definitive answer to why ESOG was able to engage in contentious policymaking, it offers some promising explanations. It finds that an explicit organizational mission focused on maternal mortality prevention, combined with relaxed organizational and political constraints to participation, left a larger space for ESOG’s policy engagement. In particular, ESOG’s social mission, its young and lean organizational structure, and the lack of organizational precedents or funding constraints to bar action; a government receptive to progressive reform and to civil society contributions; a moment of historical opening; and the high value other advocates placed on ESOG contributions, all facilitated ESOG’s participation in controversial policymaking. While Ethiopia’s government exerts tight control over civil society policy advocacy ( ), this does not negate the lesson that ob–gyn societies elsewhere can capitalize on political openings. Abortion law reform was not a first level priority of the government, and without activism by civil society actors (albeit carefully government sanctioned activism), the reform would likely not have taken place. Activism necessarily here took the form of collaborative policy advocacy, given the requirements of Ethiopia’s political regime. Further, there does appear to be more latitude for more recently formed and mission-driven medical societies such as ESOG to contribute to reform efforts, as they may not confront the internal constraints faced by more established medical societies. In sum, this case does not lead us to revise or reject existing theories about the conditions deterring medical associations from participating in controversial reforms. However, counter to theoretical predictions from the professions literature about protecting professional prerogatives from other medical cadres and clinical autonomy with respect to clients, ESOG neither opposed expanding the scope of practice of midwives, health officers and clinical nurses to include provision of abortion services, nor insisted that medical necessity be the condition for women’s eligibility for services (i.e. therapeutic abortion), but supported a law that de facto enables women to decide whether they are eligible for services. It may be that ESOG and medical societies in contexts with acute provider shortages are not as preoccupied as those in more affluent contexts with preserving clinical discretion and preserving scope of practice vis-à-vis other medical cadres. 
By examining how an ob–gyn society contributed to reform in Ethiopia, this research can help advance understanding of the circumstances in which professional medical societies in the region can advance socially contentious yet life-saving policy reforms. Ethiopia has features common to other countries in SSA: elevated levels of maternal mortality including that due to unsafe abortion, weak health infrastructure, chronic shortages of ob–gyns and other medical professionals, and young ob–gyn societies. ESOG’s successful engagement in reform suggests that even with contentious policies, medical societies can be politically relevant in part due to their scientific credibility and technical expertise, as well as their organizational commitment to maternal mortality reduction. Further, motivated leadership can take advantage of political openings to help advance policy benefiting the nation as a whole, rather than the narrow material interests of the profession. Civil society partners can also amplify medical society impacts by strategically facilitating their participation in targeted activities. Future comparative analysis could help assess whether ob–gyn societies in SSA, with their relatively tighter ties to government and often regular contributions to national administrative policymaking and training in reproductive health, are likely to be more ready to contribute to reform than is the case in other regions.
Cardiomyocyte Foxp1‐Specific Deletion Promotes Post‐injury Heart Regeneration via Targeting Usp20‐HIF1ɑ‐Hand1 Signaling Pathway
d0cf3af1-a07c-4d6b-b728-e4cb7418cc42
11948019
Cardiovascular System[mh]
Introduction Cardiovascular diseases are the leading cause of death worldwide, and definitive treatment remains a major unmet clinical need. It has been well documented that the adult mammalian heart has limited regenerative capacity and large injuries of the adult heart lead to loss of cardiomyocytes (CMs), which is often followed by scar formation, resulting in a progressive deterioration of the heart contractile capacity, ultimately leading to end-stage heart failure. Given the lack of therapeutic approaches to reverse the loss of functional myocardium, the development of efficient and safe regenerative procedures represents an urgent need in the field of modern cardiovascular research. Emerging evidence supports the notion that new CMs are continuously born in the adult mammalian heart in homeostasis, although the possible sources and generation rates of these cells remain a matter of scientific debate. Recently, it has become clear that the mammalian heart is not a post-mitotic organ. In fact, newborn mammals, such as one-day-old pigs and mice, possess the capacity to completely regenerate the myocardium through notable levels of CMs proliferation, although this mechanism is markedly diminished after the first week of life. Myocardial infarction (MI) border zone proliferating CMs were recently implicated as a source of heart regeneration in both zebrafish and mammals. These CMs have very distinct transcriptomes, with reduced mitochondrial gene expression, reduced mitochondrial activity, increased glycolytic gene expression, and increased glucose uptake, resembling proliferating embryonic CMs. Signaling networks that drive embryonic heart development may control specific mechanisms of postnatal heart regeneration. Therefore, one promising approach to improve the prognosis of heart failure is to extend this endogenous embryonic and neonatal cardiac regenerative capability to adult hearts. Understanding these mechanisms is of crucial importance for the development of novel therapeutic targets for CMs proliferation to facilitate cardiac regeneration and repair. Forkhead box P1 (Foxp1) encodes a large modular transcriptional repressor that binds DNA via a highly conserved DNA-binding domain and is important for the early development of many organ systems. Myocardial-specific loss of Foxp1 during embryonic heart development significantly increased CMs proliferation in mice, and haploinsufficiency of FOXP1 was associated with human congenital heart defects (CHDs), with the orthologous murine Foxp1 p.Pro596Ser mutant protein displaying elevated NK2 homeobox 5 (Nkx2.5) expression in cardiomyoblasts to promote proliferation. However, whether Foxp1 plays an important role in the adult heart in response to injury, and how Foxp1 impacts cardiac regeneration and repair, remain elusive. Here, we demonstrated that Foxp1 regulates CMs proliferation following neonatal apical resection (AR) and adult MI to control heart regeneration and repair. We identified the ubiquitin specific peptidase 20 (Usp20), which de-ubiquitinates hypoxia-inducible factor 1ɑ (HIF1ɑ) and prevents it from proteasomal degradation, as a direct Foxp1 target gene. Deletion of Foxp1 in CMs awakens a dormant regenerative program in neonatal and adult mice via a HIF1α-dependent control of heart and neural crest derivatives expressed 1 (Hand1) expression that, in turn, regulates the transition from fatty acid (FA) oxidation to glycolysis in CMs.
Finally, a cardiac type troponin T2 (cTnT)-promoter-driven Hand1 adeno-associated virus 9 (AAV9) significantly attenuated the effects of Foxp1 gain-of-function in CMs, which otherwise reduced CMs proliferation and impaired cardiac regeneration, repair, and cardiac function. Altogether, we propose that unlocking the endogenous regenerative potential of adult hearts through the Foxp1-Usp20-HIF1ɑ-Hand1 signaling pathway might represent a promising future therapeutic approach for heart failure. Results 2.1 Loss of Cardiomyocyte-Foxp1 Increases Cell Proliferation and Promotes Heart Regeneration in the Neonatal Apical Resection Model Recent single cell (sc)-RNA sequencing of murine neonatal myocardial infarction (MI) hearts revealed crucial regulatory networks governing heart regeneration. We therefore analyzed the dataset with accession No. GSE130699. This single nucleus RNA-sequencing dataset identified a unique immature CMs population enriched with genes of the immature heart (Tnni, Myh7, and Actc1), cell-cycle progression (Aurka, Ccnb1, Ki67, Cdk1, and Cdk4), anti-oxidant genes (Prdx1, Sod1, Sod2, and Cat), and glycolysis (Tpi1 and Aldoa), and with reduced expression of maturation genes (Myh6, Ryr2, and Cacna1c). Importantly, the immature neonatal CMs population with high regenerative capacity showed significantly reduced Foxp1 expression compared with the other, more mature CMs populations ( Figure ). Furthermore, we confirmed significantly reduced Foxp1 expression in neonatal apical resection (AR) hearts (Figure ), and immunostaining demonstrated reduced expression mainly in border zone CMs (Figure ). These results suggest a likely pivotal role for Foxp1 in CMs in the regulation of cardiac regeneration and repair after injury. To determine the Foxp1 effect in vivo, we generated mice with specific deletion of Foxp1 in CMs (Foxp1 CMKO ). Tamoxifen administration was performed from postnatal day 0 to day 3 (Figure ). Foxp1 expression was specifically lost in CMs of Foxp1 CMKO mutant mice as validated by reverse transcription quantitative real-time polymerase chain reaction (RT-qPCR), western blot, and immunostaining (Figure , Supporting Information). Interestingly, no appreciable changes in heart size, gross morphology (Figure , Supporting Information), or cardiac function by echocardiography (Table , Supporting Information) were observed between Foxp1 CMKO mice and wild-type littermates up to 12 weeks of age. Next, we examined the heart after AR performed at postnatal day 3 (P3AR). Foxp1 CMKO mice exhibited significantly less scar tissue formation at 7 days (P3ARd7) and 21 days (P3ARd21) post-AR (Figure ) as well as smaller border zone CMs (Figure ) compared with wild-type littermates, indicating that Foxp1 CMKO might have increased cardiac regenerative capacity. We then examined CMs proliferation at P3ARd7, and Foxp1 CMKO mice exhibited significantly elevated CMs proliferation compared with wild-type littermates as quantified by 5-ethynyl-2′deoxyuridine (5-EdU) incorporation (>3.5 fold) (Figure ), active cell cycle phases (G1, S, G2, and M) marker Ki67 (>3 fold) (Figure ), karyokinesis marker phosphorylated histone H3 (PH3) (>3 fold) (Figure ) and mitosis marker Aurora B kinase in the cleavage furrow (>2.5 fold) (Figure ) co-stained with the specific CMs marker, α-sarcomeric actinin (α-SA).
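As an aside, the kind of public-dataset comparison described at the start of this section (GSE130699) can be illustrated with a minimal analysis sketch in Python. This is not the authors' pipeline: the AnnData file name, the cm_cluster annotation and its immature_CM label, and the marker list are hypothetical placeholders; only the general approach (score an immature-CM signature, then compare Foxp1 expression between the annotated clusters) reflects what the text describes.

import scanpy as sc
from scipy.stats import mannwhitneyu

# Load a pre-processed snRNA-seq object; the file name is a placeholder, not a published artifact.
adata = sc.read_h5ad("gse130699_cm_nuclei.h5ad")

# Library-size normalization and log transform so expression is comparable across nuclei.
sc.pp.normalize_total(adata, target_sum=1e4)
sc.pp.log1p(adata)

# Score an "immature CM" signature per nucleus; the marker list here is illustrative only.
immature_markers = ["Tnni1", "Myh7", "Actc1", "Ccnb1", "Cdk1", "Mki67", "Tpi1", "Aldoa"]
sc.tl.score_genes(adata, gene_list=[g for g in immature_markers if g in adata.var_names],
                  score_name="immature_score")

# Pull Foxp1 expression plus a hypothetical per-nucleus cluster annotation into a data frame.
df = sc.get.obs_df(adata, keys=["Foxp1", "cm_cluster", "immature_score"])

# Mean Foxp1 per cluster, and a one-sided test of lower Foxp1 in the immature cluster.
print(df.groupby("cm_cluster")["Foxp1"].mean().sort_values())
immature = df.loc[df["cm_cluster"] == "immature_CM", "Foxp1"]
mature = df.loc[df["cm_cluster"] != "immature_CM", "Foxp1"]
print(mannwhitneyu(immature, mature, alternative="less"))

Under these assumptions, a significantly lower Foxp1 mean in the immature, highly proliferative cluster would correspond to the pattern reported above.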
For better visualization of the cardiomyocytes, we co‐stained these proliferation markers with cardiac nuclear specific pericentriolar material 1 (Pcm1), which showed similar elevated proliferating CMs in Foxp1 CMKO compared with wildtype littermates (Figure , Supporting Information). Concomitant with the in vivo study, Foxp1 knockdown in cultured neonatal mouse cardiomyocytes (NMCMs) (Figure , Supporting Information) triggered increased cell proliferation, as evidenced by 5‐EdU incorporation, PH3, and Aurora B staining (Figure , Supporting Information). This was further supported by the upregulation of cell cycle genes (Figure , Supporting Information). Altogether, these results indicated that ablation of Foxp1 in CMs results in a significant increase of neonatal CMs proliferation leading to enhanced cardiac regeneration capacity, thereby contributing to improved post‐injury tissue repair with less scar formation in the neonatal AR model. 2.2 Loss of Cardiomyocyte‐Foxp1 Increases Cell Proliferation to Promote Heart Regeneration And Improve Cardiac Function in the Adult Myocardial Infarction Model We examined whether Foxp1 signaling plays a similar promoting effect in adult heart regeneration. Tamoxifen was administered to induce Foxp1 deletion in CMs in mice aged 8–12 weeks, and MI was performed by permanent ligation of the proximal left anterior descending coronary artery ( Figure ; Figure , Supporting Information). We found no significant difference in infarct size or area at risk between Foxp1 CMKO and wild‐type littermates 1‐day post‐MI (MId1) (Figure , Supporting Information). Echocardiography showed normal heart size and function in all sham‐operated animals; however, Foxp1 CMKO mice exhibited significantly improved cardiac function, evidenced by increased left ventricular ejection fraction (LVEF) and fractional shortening (FS) (Figure ; Table , Supporting Information), as well as reduced fibrotic scar size (Figure ) and border zone CMs size (Figure ) compared with wild‐type littermates. Moreover, we observed a gradually improved cardiac function and survival in Foxp1 CMKO mice by long‐term observation (Figure , Supporting Information). These results indicate that specific deletion of Foxp1 in CMs improves cardiac repair and function in the adult MI model. There is limited CM mitosis in adult hearts although sustained induction of CM cell cycle from birth by overexpression of cell cycle regulators has been achieved. However, cell cycle re‐entry in adult CMs appears restricted to a mono‐nucleated subpopulation. We isolated adult mouse CMs by collagenase digestion and found a significant increase in mononucleated CMs percentage with the decrease of bi‐nucleated CMs percentage at the border zone of MId7 hearts in Foxp1 CMKO mice (Figure , Supporting Information). Immunostaining of heart sections revealed increased mitotic CMs, detected by elevated EdU incorporation (>6 fold) (Figure ; Figure , Supporting Information), Ki67 + (>2.8 fold) (Figure ; Figure , Supporting Information), PH3 + (>2.5 fold) (Figure ; Figure , Supporting Information), and Aurora B + (>2 fold) (Figure ) at MId14 in Foxp1 CMKO compared with wild‐type littermates. To further validate whether Foxp1 deletion promotes cell proliferation in human cardiomyocytes, we used human induced pluripotent stem cell‐derived cardiomyocytes (iPSC‐CMs). Foxp1‐siRNA knockdown significantly promotes iPSC‐CMs proliferation, as shown by 5‐EdU incorporation and PH3 staining (Figure , Supporting Information). 
These results demonstrated that Foxp1 regulates adult CM mitotic nucleation status and cell proliferation, promoting a proliferative mononuclear CM population in Foxp1-deletion mutant hearts. This mechanism contributes to enhancing heart regeneration and facilitates cardiac repair following MI in adults. 2.3 Foxp1 Gain-Of-Function In Cardiomyocytes Suppresses Proliferation To Impair Heart Regeneration And Function In contrast to Foxp1 deletion in CMs, Foxp1 gain-of-function using transgenic Foxp1 CMTg mice (Figure , Supporting Information) triggered a statistically significant reduction in heart regenerative capacity in neonatal AR and adult MI models. Significantly fewer mitotic CMs were observed in P3ARd7 hearts, as shown by the decreased percentage of proliferating CMs in Foxp1 CMTg mice compared with wild-type littermates ( Figure ). Further studies showed increased border zone CMs size (Figure ) and fibrotic scar size (Figure ) of P3ARd7 and P3ARd21 hearts in Foxp1 CMTg mice, suggesting that induced Foxp1 expression in CMs worsens the regeneration capacity of neonatal AR hearts. Similarly reduced proliferation in CMs was observed at the border zone of adult MI hearts (Figure ), and greater fibrotic scar size (Figure ) as well as deterioration of cardiac function (Figure ; Table , Supporting Information) in Foxp1 CMTg compared with wild-type littermates, suggesting that induced Foxp1 expression in CMs attenuates CM regenerative capability and worsens cardiac function in the adult MI model. 2.4 HIF1ɑ Deletion in Cardiomyocytes Reverses the Foxp1-Dependent Elevation Of Cell Proliferation in Heart Regeneration A study using HIF1ɑ oxygen-dependent degradation domain-fused Cre recombinase estrogen receptor T2 (Cre ERT2) mice identified a rare population of hypoxic CMs that display neonatal features of proliferative CMs: smaller size, mononucleation, and less oxidative DNA damage. A later report confirmed that adult mouse CMs could re-enter this neonatal proliferative state and thus stimulate heart regeneration when exposed to gradual systemic hypoxia. In the present work, we found significantly elevated expression of HIF1ɑ and of USP20, the de-ubiquitinase that inhibits its degradation, in P3AR CMs and adult MI border zone CMs of Foxp1 CMKO mice ( Figure ), implying that Foxp1 might regulate HIF1ɑ expression in order to induce CM proliferation for heart regeneration. HIF1α protein levels are usually regulated by the balance between the von Hippel-Lindau tumor suppressor (pVHL) ubiquitin E3 ligase, which targets it for degradation, and the de-ubiquitinase USP20, which prevents degradation. Foxp1 usually acts as a transcriptional repressor, reducing gene expression by binding promoter regions containing the 5′-TRTTKTY-3′ sequence. Cleavage under targets and tagmentation (CUT&Tag) analysis showed that FOXP1 was enriched in promoter regions of Usp20 (Figure ) and sequence analysis elucidated FOXP1 binding sites in the promoter region of mouse Usp20 (Figure ). ChIP-qPCR showed an association of Foxp1 with the promoter of Usp20 (Figure ; Table , Supporting Information), and a luciferase assay confirmed that a Foxp1 expression vector could dose-dependently repress the Usp20 promoter (Figure ) containing FOXP1 binding sites in NIH-3T3 cells. HIF1α ubiquitylation analysis by co-immunoprecipitation showed that Foxp1-siRNA knockdown significantly downregulated HIF1α ubiquitination and elevated HIF1α protein levels, whereas Usp20-siRNA knockdown reversed the Foxp1-siRNA-induced HIF1α de-ubiquitination and elevation of protein levels (Figure ).
These results overall suggest that Foxp1 regulates the de‐ubiquitinase USP20 to maintain HIF1ɑ stability and expression in CMs. To confirm the regulation of Foxp1 in CMs in HIF1ɑ signaling and its role in CM proliferation for heart regeneration, we generated mice with specific double deletion of HIF1ɑ and Foxp1 specifically in CMs (HIF1ɑ CMKO ; Foxp1 CMKO ). As expected, HIF1ɑ deletion in CMs (Figure , Supporting Information) significantly reversed the reduced fibrotic scar size (Figure ) and CM size at the AR border zone (Figure ) in Foxp1 CMKO mice, with no appreciable changes in heart weight/body weight and cardiac function (Table , Supporting Information). Further examination showed a similar significant reversal of elevated percentages of proliferating CMs (Figure ) in Foxp1 CMKO mice. Consistent with the in vivo study, we found that HIF1ɑ knockdown (Figure , Supporting Information) reversed the elevated cell proliferation in Foxp1‐knockdown NMCMs (Figure , Supporting Information). Taken together, these studies demonstrated that Foxp1 regulates HIF1ɑ signaling and influences CM proliferation contributing to cardiac regeneration and repair after injury. 2.5 Cardiomyocyte‐derived Foxp1‐HIF1ɑ Further Regulates Hand1, a Target Gene that Controls Metabolic Transition in Cell Proliferation and Heart Regeneration The metabolic transition from glycolysis to lipid oxidation occurs soon after birth and coincides with CM cell cycle arrest and regenerative capacity loss. Moreover, the transition from fatty acid (FA) oxidation back to glycolysis is also important for CM proliferation and heart regeneration. To assess whether Foxp1 deletion in CMs mediates extensive proliferation of these cells in the AR heart model and is dependent on metabolic transition, we performed non‐targeted metabolomics. Principal component analysis (PCA) indicated a good separation of metabolite clusters between each group ( Figure ). Notably, the enrichment of glycolytic intermediates was observed, alongside the decline in the FA pathway intermediates in AR hearts compared with sham‐operated hearts (Figure ). Also, we found a significant upregulation of genes involved in glycolysis (Glut‐1, Hk‐2, Pdk‐1, Ldha) and downregulation of genes related to FA utilization and oxidation (Mlycd, Acsl1, Hsl, Ech1, Fabp3 and Hmgcs2) of p3ARd7 hearts (Figure ), suggesting that AR hearts have elevated glycolysis and reduced FA metabolism, and thus a metabolic transition facilitating CM proliferation for heart regeneration. Moreover, Foxp1 CMKO mice exhibited significantly elevated glycolysis and suppressed FA oxidation in p3ARd7 CMs compared with those of wild‐type littermates, while HIF1ɑ ablation in CMs significantly reversed the effect of Foxp1 deletion in CMs in increasing metabolic transition to glycolysis (Figure ). These results demonstrated a crucial role of the Foxp1‐HIF1ɑ signaling pathway in CMs in the regulation of metabolic transition responsible for cell proliferation and heart regeneration in the neonatal heart AR model. Basic helix‐loop‐helix transcription factor Hand1 is highly expressed in the fetal heart under direct control of HIF1ɑ. Previous studies have shown that HIF1ɑ regulates Hand1, leading to the inhibition of lipid metabolism and promotion of glycolysis, eventually resulting in increased CM proliferation. In this study, we identified multiple HIF1ɑ binding sites within the proximal promoter region of Hand1 by sequence analysis ( Figure ). 
ChIP-qPCR indicated that HIF1ɑ directly binds to the Hand1 promoter region in NMCMs under hypoxic conditions (Figure ). A luciferase reporter assay showed that the proximal Hand1 promoter containing HIF1ɑ binding sites was activated under reduced oxygen levels. This activation was further enhanced by Foxp1 knockdown and significantly reversed upon HIF1ɑ-siRNA knockdown (Figure ). Additionally, Hand1 expression was significantly elevated in P3ARd3 CMs of Foxp1 CMKO mice (Figure ), while it was reduced in P3ARd3 CMs of HIF1ɑ CMKO mice (Figure ). HAND1 expression levels varied in parallel with reduced FOXP1 and elevated HIF1ɑ expression levels in border zone CMs of Foxp1 CMKO mice (Figure ). Concomitantly, siRNA knockdown of Foxp1 in cultured NMCMs induced a significant increase in Hand1 expression (Figure ) while HIF1ɑ knockdown induced decreased Hand1 expression (Figure ). To further define the function of the Foxp1-HIF1ɑ-Hand1 signaling pathway in metabolic transition, Hand1-siRNA knockdown was performed in NMCMs (Figure ). Hand1 depletion in Foxp1 knockdown NMCMs significantly reversed the upregulation of glycolysis-related genes and downregulation of lipid-metabolizing genes (Figure ), contributing to the reversal of elevated proliferation (Figure ). Moreover, the glucose consumption and lactate production assays demonstrated that glycolysis was more active when Foxp1 was knocked down in NMCMs. Furthermore, Hand1 or HIF1ɑ knockdown in NMCMs significantly reversed the glycolysis activation (Figure , Supporting Information) and thus reduced cell proliferation. Collectively, these results demonstrated the major contribution of the metabolic transition regulated by the Foxp1-HIF1ɑ-Hand1 signaling pathway to CM proliferation and, in turn, to heart regeneration. 2.6 Cardiomyocyte-Targeted Delivery of Hand1 Promoted Glycolytic Metabolic Transition, Raising Cell Proliferation and Improving Heart Regeneration and Recovery from Cardiac Dysfunction in Post-MI Animals with Foxp1 Gain-Of-Function in Cardiomyocytes So far, we have demonstrated that i) the Foxp1-HIF1ɑ-Hand1 signaling pathway in CMs is important for the metabolic transition to glycolysis, cell proliferation, and heart regeneration; ii) deletion of Foxp1 in CMs increased HIF1ɑ-Hand1 expression, enhancing this metabolic transition; and iii) induced expression of Foxp1 in CMs inhibits cell proliferation and heart regeneration, leading to cardiac dysfunction in post-MI hearts. In order to evaluate the potential therapeutic value of Hand1 for cardiac regeneration, we developed a cTnT promoter-driven AAV9 vector for targeted delivery of Hand1 to CMs to achieve induced expression (Figure , Supporting Information). This induced expression of Hand1 in CMs significantly reversed the reduced cell proliferation in cultured Foxp1 gain-of-function CMs (Figure , Supporting Information). Furthermore, we confirmed that Hand1 overexpression induced by AAV9 in human iPSC-CMs also promotes cell proliferation (Figure , Supporting Information). We further examined Hand1 expression in the hearts and confirmed significant upregulation in isolated CMs at the MI border zone compared with CMs at remote zones or CMs from sham-operated left ventricles (Figure ). In addition, increased Hand1 expression in ventricle CMs of Foxp1 CMKO mice was observed (Figure ). To further evaluate the in vivo heart regeneration improvement of CM-targeted delivery of Hand1 in adult hearts, we injected cTnT-AAV9-Hand1 into peri-infarct myocardial areas of post-MI Foxp1 CMTg mice and wild-type littermates.
High efficacy of induced Hand1 expression in CMs was observed in mice at 14 days after injection (Figure ; Figure , Supporting Information). Overexpression of Hand1 in CMs reduced fibrotic scar size (Figure ) and improved cardiac function (Figure ; Table , Supporting Information) at MId28 to a higher degree in Foxp1 CMTg mutants than in wild‐type hearts. Finally, overexpression of Hand1 in CMs improved CM proliferation at the MI border zone (Figure ) and ameliorated CM hypertrophy (Figure ) to a higher degree in Foxp1 CMTg mutants than in wild‐type hearts. These results demonstrate the importance of the Foxp1‐HIF1ɑ‐Hand1 signaling pathway in regulating the metabolic transition underlying CM proliferation and heart regeneration. Therapeutic targeting of this pathway may open new possibilities for the treatment of heart failure associated with insufficient heart regeneration following injury. Loss of Cardiomyocyte‐Foxp1 Increases Cell Proliferation and Promotes Heart Regeneration in the Neonatal Apical Resection Model Recent single‐cell (sc)RNA sequencing of murine neonatal myocardial infarction (MI) hearts revealed crucial regulatory networks governing heart regeneration. We therefore analyzed the dataset with accession no. GSE130699. This single‐nucleus RNA‐sequencing dataset identified a unique immature CM population enriched for genes of the immature heart ( Tnni , Myh7 , and Actc1 ), cell‐cycle progression ( Aurka , Ccnb1 , Ki67 , Cdk1 , and Cdk4 ), anti‐oxidant genes ( Prdx1 , Sod1 , Sod2 , and Cat ), and glycolysis ( Tpi1 and Aldoa ), with reduced expression of maturation genes ( Myh6 , Ryr2 and Cacna1c ). Importantly, this immature neonatal CM population with high regenerative capacity showed significantly reduced Foxp1 expression compared with the more mature CM populations ( Figure ). Furthermore, we confirmed significantly reduced Foxp1 expression in neonatal apical resection (AR) hearts (Figure ), and immunostaining demonstrated that the reduction occurred mainly in border zone CMs (Figure ). These results suggest a pivotal role for Foxp1 in CMs in the regulation of cardiac regeneration and repair after injury. To determine the effect of Foxp1 in vivo, we generated mice with specific deletion of Foxp1 in CMs (Foxp1 CMKO ). Tamoxifen administration was performed from postnatal day 0 to day 3 (Figure ). Foxp1 expression was specifically lost in CMs of Foxp1 CMKO mutant mice, as validated by reverse transcription quantitative real‐time polymerase chain reaction (RT‐qPCR), western blot, and immunostaining (Figure , Supporting Information). Interestingly, no appreciable changes in heart size, gross morphology (Figure , Supporting Information), or cardiac function by echocardiography (Table , Supporting Information) were observed between Foxp1 CMKO mice and wild‐type littermates up to 12 weeks of age. Next, we examined the heart after AR at postnatal day 3 (P3AR). Foxp1 CMKO mice exhibited significantly less scar tissue formation at 7 days (P3ARd7) and 21 days (P3ARd21) post‐AR (Figure ), as well as smaller border zone CMs (Figure ), compared with wild‐type littermates, indicating that Foxp1 CMKO hearts might have increased regenerative capacity.
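As a generic aside for readers who wish to reproduce this kind of morphometry, the short Python sketch below illustrates one way to express fibrotic scar size as a percentage of the sectioned myocardium, assuming binary masks of collagen‐positive pixels and of the whole myocardium are already available; the masks and the 100 × 100 example arrays are purely hypothetical and do not represent the segmentation pipeline used in this study.

import numpy as np

# Generic sketch: fibrotic scar area as a percentage of the myocardial section,
# given a boolean mask of collagen-positive (e.g., trichrome-stained) pixels and
# a mask of the whole myocardium. Mask generation itself is tool-specific and not shown.
def scar_percentage(collagen_mask: np.ndarray, myocardium_mask: np.ndarray) -> float:
    scar_px = np.logical_and(collagen_mask, myocardium_mask).sum()
    total_px = myocardium_mask.sum()
    return 100.0 * scar_px / total_px

# Hypothetical 100 x 100 masks, for demonstration only
myo = np.ones((100, 100), dtype=bool)
scar = np.zeros_like(myo)
scar[40:60, 40:70] = True          # a 20 x 30 pixel "scar" region
print(scar_percentage(scar, myo))  # prints 6.0, i.e., 6% of the section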
We then examined CM proliferation at P3ARd7; Foxp1 CMKO mice exhibited significantly elevated CM proliferation compared with wild‐type littermates, as quantified by 5‐ethynyl‐2′‐deoxyuridine (EdU) incorporation (>3.5‐fold) (Figure ), the active cell cycle phase (G1, S, G2, and M) marker Ki67 (>3‐fold) (Figure ), the karyokinesis marker phosphorylated histone H3 (PH3) (>3‐fold) (Figure ), and the mitosis marker Aurora B kinase in the cleavage furrow (>2.5‐fold) (Figure ), each co‐stained with the CM‐specific marker α‐sarcomeric actinin (α‐SA). For better visualization of the cardiomyocytes, we also co‐stained these proliferation markers with the cardiomyocyte nucleus‐specific marker pericentriolar material 1 (Pcm1), which showed a similar elevation of proliferating CMs in Foxp1 CMKO mice compared with wild‐type littermates (Figure , Supporting Information). Consistent with the in vivo findings, Foxp1 knockdown in cultured neonatal mouse cardiomyocytes (NMCMs) (Figure , Supporting Information) triggered increased cell proliferation, as evidenced by EdU incorporation, PH3, and Aurora B staining (Figure , Supporting Information). This was further supported by the upregulation of cell cycle genes (Figure , Supporting Information). Altogether, these results indicate that ablation of Foxp1 in CMs produces a significant increase in neonatal CM proliferation and enhanced cardiac regenerative capacity, thereby contributing to improved post‐injury tissue repair with less scar formation in the neonatal AR model. Loss of Cardiomyocyte‐Foxp1 Increases Cell Proliferation to Promote Heart Regeneration and Improve Cardiac Function in the Adult Myocardial Infarction Model We next examined whether Foxp1 signaling plays a similar role in adult heart regeneration. Tamoxifen was administered to induce Foxp1 deletion in CMs in mice aged 8–12 weeks, and MI was performed by permanent ligation of the proximal left anterior descending coronary artery ( Figure ; Figure , Supporting Information). We found no significant difference in infarct size or area at risk between Foxp1 CMKO and wild‐type littermates 1 day post‐MI (MId1) (Figure , Supporting Information). Echocardiography showed normal heart size and function in all sham‐operated animals; however, Foxp1 CMKO mice exhibited significantly improved cardiac function, evidenced by increased left ventricular ejection fraction (LVEF) and fractional shortening (FS) (Figure ; Table , Supporting Information), as well as reduced fibrotic scar size (Figure ) and border zone CM size (Figure ), compared with wild‐type littermates. Moreover, long‐term observation showed gradually improved cardiac function and survival in Foxp1 CMKO mice (Figure , Supporting Information). These results indicate that specific deletion of Foxp1 in CMs improves cardiac repair and function in the adult MI model. CM mitosis is limited in adult hearts, although sustained induction of the CM cell cycle from birth has been achieved by overexpression of cell cycle regulators. However, cell cycle re‐entry in adult CMs appears restricted to a mononucleated subpopulation. We isolated adult mouse CMs by collagenase digestion and found a significant increase in the percentage of mononucleated CMs, with a corresponding decrease in binucleated CMs, at the border zone of MId7 hearts in Foxp1 CMKO mice (Figure , Supporting Information).
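For illustration only, the following minimal Python sketch shows how marker‐positive CM percentages and fold changes of the kind quoted above can be tabulated; the counts are invented for demonstration and are not the study's data.

# Illustrative counts only (not the study's data): marker-positive cardiomyocytes
# among alpha-sarcomeric-actinin-positive cells, per genotype.
counts = {
    "WT":         {"EdU": 12, "Ki67": 18, "PH3": 6,  "total_CM": 1000},
    "Foxp1_CMKO": {"EdU": 45, "Ki67": 58, "PH3": 19, "total_CM": 1000},
}

def percent_positive(group, marker):
    return 100.0 * counts[group][marker] / counts[group]["total_CM"]

for marker in ("EdU", "Ki67", "PH3"):
    wt = percent_positive("WT", marker)
    ko = percent_positive("Foxp1_CMKO", marker)
    print(f"{marker}: WT {wt:.1f}%  KO {ko:.1f}%  fold change {ko / wt:.1f}x")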
Immunostaining of heart sections revealed increased mitotic CMs at MId14 in Foxp1 CMKO mice compared with wild‐type littermates, detected by elevated EdU incorporation (>6‐fold) (Figure ; Figure , Supporting Information), Ki67 positivity (>2.8‐fold) (Figure ; Figure , Supporting Information), PH3 positivity (>2.5‐fold) (Figure ; Figure , Supporting Information), and Aurora B positivity (>2‐fold) (Figure ). To further validate whether Foxp1 deletion promotes cell proliferation in human cardiomyocytes, we used human induced pluripotent stem cell‐derived cardiomyocytes (iPSC‐CMs). Foxp1‐siRNA knockdown significantly promoted iPSC‐CM proliferation, as shown by EdU incorporation and PH3 staining (Figure , Supporting Information). These results demonstrate that Foxp1 regulates adult CM mitotic nucleation status and cell proliferation, promoting a proliferative mononuclear CM population in Foxp1‐deletion mutant hearts. This mechanism contributes to enhanced heart regeneration and facilitates cardiac repair following MI in adults. Foxp1 Gain‐Of‐Function in Cardiomyocytes Suppresses Proliferation and Impairs Heart Regeneration and Function In contrast to Foxp1 deletion in CMs, Foxp1 gain‐of‐function using transgenic Foxp1 CMTg mice (Figure , Supporting Information) caused a statistically significant reduction of heart regeneration capacity in both the neonatal AR and adult MI models. Significantly fewer mitotic CMs were observed in P3ARd7 hearts, reflected by a decreased percentage of proliferating CMs in Foxp1 CMTg mice compared with wild‐type littermates ( Figure ). Further analyses showed increased border zone CM size (Figure ) and fibrotic scar size (Figure ) in P3ARd7 and P3ARd21 hearts of Foxp1 CMTg mice, suggesting that induced Foxp1 expression in CMs worsens the regenerative capacity of neonatal AR hearts. Similarly, reduced CM proliferation was observed at the border zone of adult MI hearts (Figure ), together with greater fibrotic scar size (Figure ) and deterioration of cardiac function (Figure ; Table , Supporting Information), in Foxp1 CMTg mice compared with wild‐type littermates, suggesting that induced CM‐Foxp1 expression attenuates CM regenerative capability and worsens cardiac function in the adult MI model. HIF1ɑ Deletion in Cardiomyocytes Reverses the Foxp1‐Dependent Elevation of Cell Proliferation in Heart Regeneration A study using HIF1ɑ oxygen‐dependent degradation domain‐fused Cre recombinase estrogen receptor T2 (Cre ERT2 ) mice identified a rare population of hypoxic CMs that display neonatal features of proliferative CMs, namely smaller size, mononucleation, and less oxidative DNA damage. A later report confirmed that adult mouse CMs could re‐enter this neonatal proliferative state and thus stimulate heart regeneration when exposed to gradual systemic hypoxia. In the present work, we found significantly elevated expression of HIF1ɑ and of USP20, the de‐ubiquitinase that inhibits its degradation, in P3AR CMs and adult MI border zone CMs of Foxp1 CMKO mice ( Figure ), implying that Foxp1 might regulate HIF1ɑ expression to induce CM proliferation for heart regeneration. HIF1α protein abundance is usually regulated by the balance between the von Hippel‐Lindau tumor suppressor (pVHL) ubiquitin E3 ligase, which targets it for degradation, and the de‐ubiquitinase USP20, which prevents degradation. Foxp1 usually acts as a transcriptional repressor, reducing gene expression by binding to promoter regions containing the 5′‐TRTTKTY‐3′ sequence.
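To make the degenerate consensus concrete, a minimal Python sketch is given below that scans a promoter sequence for 5′‐TRTTKTY‐3′ sites on both strands using IUPAC codes (R = A/G, K = G/T, Y = C/T); the example sequence is invented for illustration and is not the mouse Usp20 promoter.

import re

# IUPAC degenerate codes used in the reported FOXP1 consensus 5'-TRTTKTY-3'
IUPAC = {"A": "A", "C": "C", "G": "G", "T": "T",
         "R": "[AG]", "Y": "[CT]", "K": "[GT]", "M": "[AC]",
         "S": "[CG]", "W": "[AT]", "N": "[ACGT]"}

def motif_to_regex(motif: str) -> str:
    """Translate an IUPAC motif into a plain regular expression."""
    return "".join(IUPAC[base] for base in motif)

def reverse_complement(seq: str) -> str:
    return seq.translate(str.maketrans("ACGT", "TGCA"))[::-1]

def scan_promoter(promoter: str, motif: str = "TRTTKTY"):
    """Return (position, strand, site) hits of the motif on both strands."""
    pattern = re.compile(motif_to_regex(motif))
    hits = [(m.start(), "+", m.group()) for m in pattern.finditer(promoter)]
    rc = reverse_complement(promoter)
    n = len(promoter)
    hits += [(n - m.end(), "-", m.group()) for m in pattern.finditer(rc)]
    return sorted(hits)

# Hypothetical 60-bp promoter fragment, for illustration only
example = "GGCATATTGTCAGCTGATTTGACCTATGTTGTCAAGGTCTTAGTCTTGTTACCATGCATT"
print(scan_promoter(example))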
Cleavage under targets and tagmentation (CUT&Tag) analysis showed that FOXP1 was enriched in the promoter region of Usp20 (Figure ), and sequence analysis identified FOXP1 binding sites in the promoter region of mouse Usp20 (Figure ). ChIP‐qPCR showed an association of Foxp1 with the Usp20 promoter (Figure ; Table , Supporting Information), and a luciferase assay confirmed that a Foxp1 expression vector dose‐dependently repressed the Usp20 promoter containing FOXP1 binding sites in NIH‐3T3 cells (Figure ). HIF1α ubiquitylation analysis by co‐immunoprecipitation showed that Foxp1‐siRNA knockdown significantly reduced HIF1α ubiquitination and elevated HIF1α protein levels, whereas Usp20‐siRNA knockdown reversed the Foxp1‐siRNA‐induced HIF1α de‐ubiquitination and elevation of protein levels (Figure ). Overall, these results suggest that Foxp1 regulates the de‐ubiquitinase USP20 to control HIF1ɑ stability and expression in CMs. To confirm that Foxp1 in CMs regulates HIF1ɑ signaling and its role in CM proliferation for heart regeneration, we generated mice with double deletion of HIF1ɑ and Foxp1 specifically in CMs (HIF1ɑ CMKO ; Foxp1 CMKO ). As expected, HIF1ɑ deletion in CMs (Figure , Supporting Information) significantly reversed the reduced fibrotic scar size (Figure ) and CM size at the AR border zone (Figure ) of Foxp1 CMKO mice, with no appreciable changes in heart weight/body weight ratio or cardiac function (Table , Supporting Information). Further examination showed a similar significant reversal of the elevated percentage of proliferating CMs (Figure ) in Foxp1 CMKO mice. Consistent with the in vivo study, we found that HIF1ɑ knockdown (Figure , Supporting Information) reversed the elevated cell proliferation in Foxp1‐knockdown NMCMs (Figure , Supporting Information). Taken together, these studies demonstrate that Foxp1 regulates HIF1ɑ signaling and influences CM proliferation, contributing to cardiac regeneration and repair after injury. Cardiomyocyte‐derived Foxp1‐HIF1ɑ Further Regulates Hand1, a Target Gene that Controls the Metabolic Transition in Cell Proliferation and Heart Regeneration The metabolic transition from glycolysis to lipid oxidation occurs soon after birth and coincides with CM cell cycle arrest and loss of regenerative capacity. Conversely, the transition from fatty acid (FA) oxidation back to glycolysis is also important for CM proliferation and heart regeneration. To assess whether the extensive CM proliferation mediated by Foxp1 deletion in the AR heart model depends on this metabolic transition, we performed non‐targeted metabolomics. Principal component analysis (PCA) indicated good separation of metabolite clusters between the groups ( Figure ). Notably, enrichment of glycolytic intermediates was observed alongside a decline in FA pathway intermediates in AR hearts compared with sham‐operated hearts (Figure ). We also found significant upregulation of genes involved in glycolysis (Glut‐1, Hk‐2, Pdk‐1, Ldha) and downregulation of genes related to FA utilization and oxidation (Mlycd, Acsl1, Hsl, Ech1, Fabp3 and Hmgcs2) in P3ARd7 hearts (Figure ), suggesting that AR hearts have elevated glycolysis and reduced FA metabolism, and thus undergo a metabolic transition facilitating CM proliferation for heart regeneration.
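As a schematic illustration of this analysis step (not the actual metabolomics workflow of this study), the Python sketch below runs a PCA on a toy metabolite intensity matrix to check group separation; the sample sizes, intensities and group shift are fabricated for demonstration.

import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Toy metabolite intensity matrix: rows = samples, columns = metabolites.
# Group labels are illustrative (e.g., sham vs AR hearts); real data would
# come from the non-targeted metabolomics run.
rng = np.random.default_rng(0)
sham = rng.normal(loc=0.0, scale=1.0, size=(6, 50))
ar = rng.normal(loc=1.5, scale=1.0, size=(6, 50))   # shifted to mimic a metabolic transition
X = np.vstack([sham, ar])
labels = ["sham"] * 6 + ["AR"] * 6

# Autoscale each metabolite, then project onto the first two principal components
scores = PCA(n_components=2).fit_transform(StandardScaler().fit_transform(X))
for group in ("sham", "AR"):
    idx = [i for i, g in enumerate(labels) if g == group]
    print(group, scores[idx, 0].mean().round(2), scores[idx, 1].mean().round(2))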
Moreover, Foxp1 CMKO mice exhibited significantly elevated glycolysis and suppressed FA oxidation in P3ARd7 CMs compared with those of wild‐type littermates, while HIF1ɑ ablation in CMs significantly reversed the effect of Foxp1 deletion in CMs in increasing metabolic transition to glycolysis (Figure ). These results demonstrated a crucial role of the Foxp1‐HIF1ɑ signaling pathway in CMs in the regulation of metabolic transition responsible for cell proliferation and heart regeneration in the neonatal heart AR model. Basic helix‐loop‐helix transcription factor Hand1 is highly expressed in the fetal heart under direct control of HIF1ɑ. Previous studies have shown that HIF1ɑ regulates Hand1, leading to the inhibition of lipid metabolism and promotion of glycolysis, eventually resulting in increased CM proliferation. In this study, we identified multiple HIF1ɑ binding sites within the proximal promoter region of Hand1 by sequence analysis ( Figure ).
Discussion In the current study, we analyzed tissue and single‐cell transcriptome datasets of murine regenerative hearts and found a significant reduction of Foxp1 expression in CMs with high proliferative capacity at the MI border zone. We then used both loss‐of‐function and gain‐of‐function mouse models to validate the beneficial effects of CM‐specific Foxp1 deletion on metabolic reprogramming from FA oxidation to glycolysis, thereby enhancing cell proliferation and heart regeneration. Furthermore, we identified the HIF1α‐Hand1 transcription network as a direct downstream target of Foxp1 in regulating CM proliferation. Collectively, our data provide a novel strategy to provoke post‐injury cardiac regeneration and repair (Figure , Supporting Information). The mammalian neonatal heart possesses a transient capacity for cardiac regeneration during the first week after birth. Over the past decade, various strategies have been employed to enhance postnatal heart regeneration by manipulating growth factor signaling, transcription factors, cell cycle regulatory molecules, microRNAs, and inflammatory cytokines. More recently, it has become evident that the energy metabolism of CMs governs developmental changes and controls the endogenous cardiac regenerative capacity. We demonstrated that Foxp1 deletion in CMs enhanced cardiac regenerative capacity in both neonatal AR and adult MI models. We also identified Usp20, a deubiquitinase that prevents HIF1α degradation, as a direct Foxp1 target gene.
The CM‐specific deletion of Foxp1 elevates HIF1ɑ expression by inhibiting its degradation. Since HIF1α is the primary regulator of cellular energy metabolism under hypoxic conditions, its regulation by Foxp1 suggests a possible metabolic control mechanism of CM proliferation during heart regeneration. The endogenous cardiac regenerative capacity in vertebrates has been reported to be under metabolic control. Metabolic reprogramming of CMs from glycolysis to oxidative phosphorylation is sufficient to promote cell cycle arrest of immature CMs. Conversely, hypoxia and metabolic reprogramming of adult CMs to a glycolytic state can facilitate cell cycle re‐entry, stimulate CM proliferation for heart regeneration, and improve post‐injury recovery. As described earlier, Foxp1 regulates its downstream gene HIF1α, and deletion of Foxp1 significantly elevates HIF1α expression in CMs. This elevation might mimic hypoxia and promote CM proliferation by reprogramming postnatal CMs to a glycolytic state. Moreover, CM‐specific deletion of HIF1α reversed the Foxp1 deletion‐mediated increase in CM proliferation in the neonatal AR model. These data confirm the importance of the Foxp1‐HIF1α axis in CMs for postnatal heart regeneration and for improving recovery from cardiac dysfunction. Inducing HIF‐1 activity in adult mice after MI leads to a robust regenerative response via the induction of proliferation of existing CMs, supporting a link between CM metabolism and cell cycle activity. The transcription factor Hand1 has been reported as a hypoxia‐dependent gene involved in the promotion of fetal CM glycolysis and in the inhibition of a considerable number of genes involved in the lipid metabolism of postnatal CMs. Our study demonstrated that Foxp1 regulates the metabolic transition between glycolysis and FA metabolism via Hand1 expression. Finally, CM‐targeted delivery of Hand1 using a cTnT‐promoter‐driven AAV9 to induce Hand1 expression in CMs of Foxp1 CMTg mice significantly attenuated the Foxp1 gain‐of‐function‐mediated impairment of CM proliferation and heart regeneration. Our data demonstrate that the Foxp1‐HIF1α‐Hand1 signaling cascade plays a crucial role in CMs in the metabolic transition, cell proliferation, and heart regeneration. This suggests that Hand1 could serve as a novel molecular target for gene therapy in the treatment of heart failure. Several recent studies have indicated that long‐term induction of cardiomyocyte proliferation may have detrimental effects. We found that postnatal deletion of Foxp1 in CMs did not significantly affect the morphology or function of neonatal hearts but enhanced cardiac regeneration after apical resection injury. In adult mice, we assessed cardiac recovery up to 8 weeks after MI and observed improved survival, cardiac morphology, and function in mice with CM‐specific Foxp1 deletion. Similarly, induction of Hand1 expression via AAV9‐Hand1 up to 4 weeks post‐injury resulted in improved cardiac morphology and function. Importantly, we did not observe any adverse side effects such as cardiac dysfunction, sudden arrhythmic death, or cardiomyocyte overgrowth and hypertrophy. These results highlight the therapeutic potential of modulating the Foxp1‐HIF1α‐Hand1 pathway for cardiac regeneration.
However, gene therapy approaches aimed at stimulating endogenous cardiomyocyte proliferation must address concerns such as tumorigenic risks and dedifferentiation of cardiomyocytes, which can lead to aberrant gene activation and impaired contractile function. Therefore, striking a balance between regeneration and preservation of cardiac function is critical in developing effective strategies for cardiac repair, and tight control over the dosage and duration of these therapies is essential. Recent advancements in gene therapy technologies provide promising avenues for safe and efficient cardiac regeneration. For instance, Magadum et al. developed a cardiomyocyte‐specific modified RNA (modRNA) expression system to induce Pkm2 gene expression for cardiac regeneration and repair. This technology is gaining attention due to its high efficiency, transient nature (lasting 8–12 days), and dose‐dependent, controllable gene delivery. Additionally, Sun et al. applied this modRNA technique for cardiac regeneration in pig MI models. Our study contributes to this growing field by identifying Foxp1 and Hand1 as potential molecular targets for post‐injury cardiac regeneration strategies. Combining the modulation of these targets with such techniques could enhance the efficacy and safety of gene therapies aimed at promoting cardiac repair. Conclusions In conclusion, our study identifies the transcription factor Foxp1, a key regulator of embryonic cardiovascular development, as a critical endogenous factor governing post‐injury cardiomyocyte proliferation. Mechanistically, we found that cardiomyocyte‐specific loss of Foxp1 activates the USP20‐HIF1α‐Hand1 signaling pathway, promoting a metabolic shift from fatty acid oxidation to glycolysis and ultimately enhancing cardiomyocyte proliferation and heart regeneration. Importantly, our findings suggest that Hand1, a downstream target in this pathway, could serve as a promising molecular target for gene therapy in the treatment of heart failure. Overall, these results may guide the development of novel molecular strategies that promote heart regeneration and repair for therapeutic intervention in heart failure. Experimental Section Animals All animal procedures were performed in accordance with the Institutional Animal Care and Use of Laboratory Animals guidelines approved by the Tongji University Institutional Animal Care and Use Committee. The conditional Foxp1 loss‐of‐function (Foxp1 flox/flox , a kind gift from Prof. Edward E. Morrisey) and gain‐of‐function (Foxp1 Tg/+ ) mice were crossed with a tamoxifen‐inducible ɑ‐MHC promoter‐driven Cre line (ɑ‐MHC MerCreMer , #005657, Jackson Laboratory) to generate CM‐specific Foxp1 loss‐ and gain‐of‐function mice (Foxp1 CMKO and Foxp1 CMTg , respectively). Similarly, HIF1ɑ conditional loss‐of‐function mice (HIF1ɑ flox/flox , NM‐CKO‐190065, Shanghai Model Organisms Center) were crossed with Foxp1 CMKO mice to generate cardiomyocyte‐specific Foxp1 and HIF1ɑ double knockout mice. All strains were maintained on a C57BL/6J background (purchased from Shanghai Slac Laboratory Animal Co. Ltd). The neonatal apical resection (AR) and adult myocardial infarction (MI) mouse models are described in detail in the Experimental Section of the Supporting Information. Details of adeno‐associated virus 9 (AAV9) injection for CM‐specific Hand1 overexpression in vivo, used to rescue the Foxp1 CMTg‐mediated impairment of cardiac regeneration, are provided in the Supporting Information.
Datamining Analysis, Molecular Methods, and Reagents Details of the data mining analysis of the neonatal heart regeneration database, RT‐qPCR, immunoblotting, hematoxylin and eosin (H&E) staining, Masson's trichrome staining, triphenyl tetrazolium chloride (TTC) staining and immunostaining, expression vectors, cell culture/transfection and proliferation/migration assays, in vitro siRNA transfection, ChIP assay, luciferase reporter assay, and the non‐targeted metabolomic study are provided in the Supporting Information. Statistical Analysis Statistical analysis was performed using SPSS software version 25.0 and GraphPad Prism 9. Kaplan‐Meier survival curves were used to examine mouse survival rates, and differences were analyzed with the log‐rank (Mantel‐Cox) test. Parametric data are presented as means ± S.E.M. for at least three independent assays unless otherwise noted. All data passed normality and equal‐variance testing before analysis. Student's t‐test was used for two‐sample comparisons, one‐way ANOVA with Tukey post‐hoc tests for comparisons between multiple groups, and two‐way ANOVA for comparisons between multiple groups when there were two experimental factors. p‐Values of <0.05 were considered statistically significant. Ethical Statement All animal procedures were performed in accordance with the Institutional Animal Care and Use of Laboratory Animals guidelines approved by the Tongji University Institutional Animal Care and Use Committee. Institutional Review Board (IRB) approval for this study was obtained from Shanghai East Hospital and Tongji University (TJBB00222101).
The authors declare no conflict of interest. Y. F. W., X. Y. W., J. F., and X. L. C. contributed equally to this work. J. L. and Y. Z. Z. conceived and designed the study; Y. F. W., X. Y. W., J. F., X. L. C., T. Z., S. P., W. Z. B., W. R. W., and Y. S. L. performed experiments and acquired data; T. X. and W. R. W. analyzed the single‐cell database; H. K. W. provided the Foxp1 gain‐of‐function mice; B. T., P. C., S. G. Z., Q. Z., and L. Z. gave suggestions and revised the manuscript; Z. M. L. and J. J. P. contributed grant support and gave suggestions for study design and manuscript revision. All authors contributed to drafting and revising the article and approved the submitted version. Supporting Information
Oro‐Dental Characteristics in Patients With Adult‐Onset Hypophosphatasia Compared to a Healthy Control Group–A Case‐Control Study
Background Hypophosphatasia (HPP) is a rare inherited metabolic disease that can affect oral and dental health. The disease can be inherited in an autosomal dominant or autosomal recessive manner and is caused by pathogenic variant(s) in the ALPL gene. This gene encodes the tissue nonspecific alkaline phosphatase (TNSALP), which is important for bone and tooth mineralisation. In patients with HPP, TNSALP activity is persistently reduced, leading to pathological mineralisation of hard tissue and elevation of TNSALP substrates. HPP is a heterogeneous disease with a wide range of severity and various clinical manifestations. Clinical symptoms can include recurrent fractures, dental problems, reduced physical activity and chronic pain in muscles and bones. Previous research has suggested diagnostic criteria for HPP in adults and children. Patients diagnosed in adulthood (aHPP) are often mildly affected and heterozygous for a pathogenic variant in ALPL , while patients diagnosed in childhood (paediatric onset) typically show more severe symptoms and are often compound heterozygous for pathogenic variants in ALPL . Genetic studies indicate that the prevalence of the mild forms of HPP often diagnosed in adults is up to 1:508, which suggests that the disease is likely unrecognised and underdiagnosed. Various oral and dental manifestations have previously been reported in patients with HPP, including premature loss of primary teeth, premature loss of permanent teeth and marginal bone loss. The premature loss of teeth has for many years been attributed to disturbed cementum formation. In addition, ankylosis, tooth agenesis, late eruption of the primary and permanent dentition and impaction have also been found in such patients. Furthermore, small and bulbous crowns, cervical constriction, enamel and dentine hypoplasia, enamel hypomineralisation, increased occurrence of dental caries and increased tooth wear and fractures have been reported in patients with HPP. Even research without a specific orthodontic focus has described underdevelopment of the alveolar process and malocclusions such as open bite, crowding and anterior or posterior crossbite in such patients. We hypothesise that patients with aHPP have altered oro‐dental characteristics due to pathological mineralisation of hard tissue, including a higher presence of altered tooth and root morphology, opacities, tooth wear and fractures, as well as increased bone loss. Even though the relation between HPP and involvement of hard tissues such as teeth was established decades ago, oral and dental characteristics in patients with aHPP have so far only been described in case reports, case series and family studies, primarily focusing on the primary dentition. Thus, no previous studies have systematically described oral and dental characteristics in patients with aHPP. The aim of the present study is to investigate oral and dental characteristics in patients with aHPP compared to a group of healthy controls. Materials and Methods 2.1 Study Design and Participants This case–control study was performed at the Resource Centre for Rare Oral Diseases, Copenhagen University Hospital, Rigshospitalet, Copenhagen, between September 2022 and June 2023. The present study was conducted according to a standardised protocol validated in previous studies in patients with Ehlers‐Danlos syndrome. A power calculation was performed prior to the study, based on a previous study which reported that 8% of healthy adults have dental enamel defects.
Assuming that 50% of patients with HPP have dental enamel defects, with a type 1 error of 5% and a type 2 error of 20% (i.e., an 80% chance of detecting a difference between the two groups), the power calculation showed that approximately 18 participants were needed in each group to detect a significant difference. A total of 46 patients with HPP were recruited from a previous study, which was conducted between September 2017 and February 2020. All adults with HPP were diagnosed on the basis of biochemical and clinical features of HPP. In addition, the HPP diagnosis was verified by genetic testing. None of the participants were diagnosed on the basis of dental signs alone. Biochemical, clinical as well as genetic characteristics of this cohort are described by Hepp et al. Patients with aHPP aged between 18 and 80 years were included in the present study. Pregnant patients with aHPP were excluded. The healthy controls were recruited by advertisement in dental clinics, at the Department of Odontology and online. The inclusion criteria for healthy controls were no known diseases or syndromes, age ranging from 18 to 80 years, at least 24 teeth, neutral occlusion and no previous orthodontic treatment. Healthy controls were excluded if they had multiple degraded, untreated clinical crowns (caries or fracture), sleep disorders, pregnancy or familial predisposition to HPP or rickets (Figure ). All participants were interviewed and examined by one examiner (FJ) under the supervision of XH and LS using standard and validated methods. The study was approved by the Danish National Committee on Health Research Ethics (Protocol H‐22008426) and the Danish Data Protection Agency (514‐0739/22‐3000). All participants provided informed consent to participate in the study. Furthermore, the study protocol was established in accordance with the guidelines of the Declaration of Helsinki. 2.2 Interview and Clinical Examination The interview included questions about a tendency to spontaneous fractures of the teeth, dental caries, agenesis and early loss of primary and permanent teeth. Each question had the answer alternatives 'yes', 'no' or 'don't know'. The clinical examination comprised an assessment of the dentition, caries experience, oral hygiene and oral mucosa. The following were registered: teeth present, tooth fractures, attrition with dentin exposure, enamel hypoplasia or opacities according to the developmental defects of enamel (DDE) index, mucosal bruising or ulceration, the presence of the lingual and inferior labial frenulum, and the oral hygiene index (OHI). A tooth was registered as present and scored '1' when a part of the tooth had penetrated the mucosa, and registered as not present and scored '0' when the tooth was either not erupted, extracted, or more than two‐thirds of the tooth surface was completely decayed or fractured. The presence of a crown fracture, attrition with dentin exposure, enamel hypoplasia (pits, grooves or areas) or enamel opacity (white, yellow or brown) on the erupted portion of the tooth, and the presence of mucosal bruising or ulceration, was scored as present '1' or not present '0'. Further, the presence of the lingual and inferior labial frenulum was scored as present '0' or not present '1'. The OHI was calculated as the total debris score divided by the number of surfaces scored.
No plaque was scored as ‘0’, plaque covering < 1/3 of the tooth surface was scored as ‘1’, plaque covering between > 1/3 and < 2/3 of the tooth surface was scored as ‘2’ and plaque covering > 2/3 of the tooth surface was scored as ‘3’ . 2.3 Radiographic Examination The radiographic examination included a panoramic radiograph (OP) and a cone‐beam computed tomography (CBCT) scan, both recorded at the Cephalometric laboratory, Department of Odontology, Copenhagen University by the same radiologist. The OPs were obtained in a ProMax 2D (Panoramic Xray Unit, Planmeca Oy, Helsinki, Finland) and used to record the number of teeth, tooth agenesis, supernumerary teeth, impacted teeth, decayed, missing and filled teeth (DMFT value) and root‐filled teeth. In addition, the presence of deviation in enamel radiolucency, deviation in crown morphology, taurodontism, gracile roots, deviation in root morphology, pulp stones/denticles and pulp obliterations were registered. The presence was scored as ‘1’, and the absence was scored as ‘0’ . The CBCT scans were obtained in a ProMax 3D Max (Planmeca Oy, Helsinki, Finland, serial number 509S05‐0703) with the following settings: 96 kV, 5 mA, exposure time of 9.020–9.113 s, image size of 575 × 575 × 433 and voxel size of 400 μm. The results were saved as Digital Imaging and Communication in Medicine (DICOM) format and imported to the Planmeca Romexis Viewer (5.3.5.80) computer programme, which was used to create 3D images in the Explorer 3D sub‐module, where further analysis was performed. The 3D image registration was performed to achieve optimal visualisation of the selected registration of teeth in the coronal, sagittal and axial views . The CBCT scans were used to evaluate the marginal bone level, crown height and root length on the first molars and central incisors in both jaws (Figure ). The marginal bone level was measured as the distance between the cement–enamel junction (CEJ) and the most apical part of the bone level at the mesial, distal, lingual/palatinal and buccal aspect of the tooth in the coronal and sagittal view. The crown height was defined as the distance from the incisal edge of the incisors or the buccal cusp tip of the molars perpendicular to the line connecting the most mesial and distal CEJ in the coronal view. The root length was measured as the distance between the most apical point of the root perpendicular to the line connecting the most mesial and distal CEJ in the coronal view. The crown/root ratio was subsequently calculated by dividing the crown height by the root length . Regarding measurement of the marginal bone level, a small value indicates that there was no bone loss, whereas a larger value indicates that bone loss had occurred. All parameters were measured in millimetres. 2.4 Reliability Following the calibration of XH and FJ using 15 randomly selected OPs and CBCTs, the inter‐ and intra‐observer agreement was assessed on 25 randomly selected OPs and CBCTs. XH and FJ performed the inter‐observer registrations, while FJ conducted the intra‐observer registrations by repeating the measurements and registrations at a 1‐month interval. Regarding the registrations on the OPs, no systematic error was found and the inter‐ and intra‐observer agreement for the registrations on the OPs was κ = 0.77–1 and κ = 0.88–1, respectively. For the measurements on CBCT, no systematic error was found and the method error according to Dahlberg formula was 0.09–0.71 mm and the Houston reliability coefficient was 0.66–0.99. 
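For readers unfamiliar with these reliability statistics, the short Python sketch below illustrates how an inter‐observer kappa and Dahlberg's method error, ME = sqrt(Σd²/2n), can be computed; the ratings and measurements are invented for demonstration and are not the study's data.

import numpy as np
from sklearn.metrics import cohen_kappa_score

# Inter-observer agreement on categorical OP registrations (0/1 scores); values are invented
observer_1 = [1, 0, 0, 1, 1, 0, 1, 0, 0, 1]
observer_2 = [1, 0, 0, 1, 1, 0, 1, 0, 1, 1]
print("kappa:", round(cohen_kappa_score(observer_1, observer_2), 2))

# Dahlberg's method error for repeated CBCT measurements (in mm):
# ME = sqrt(sum(d_i^2) / (2n)), with d_i the difference between the two sessions
first = np.array([3.1, 2.8, 4.0, 3.5, 2.9])
second = np.array([3.0, 2.9, 4.2, 3.4, 3.0])
d = first - second
me = np.sqrt((d ** 2).sum() / (2 * len(d)))
print("Dahlberg ME (mm):", round(float(me), 2))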
The clinical recordings including presence of teeth, crown fracture, attrition with dentin exposure, enamel hypoplasia and enamel opacity were re‐assessed on clinical photos by two of the authors. If any doubt, the tooth was registered as no deviation. 2.5 Statistical Analysis The statistical analyses were performed in SPSS (IBM, version 28.0), and the level of significance was set to 5%. The categorical data was analysed using Fisher's exact test for 2 × 2 tables and Fisher–Freeman–Halton Exact test for tables larger than 2 × 2. Subsequently, multiple logistic regression was performed for all the statistically significant categorical variables to adjust for age and gender. For statistically significant variables where ‘0’ was included in the dataset, multiple logistic regression was not performed. The normality of the continuous data was determined by assessing Q‐Q plots and Shapiro–Wilk test. Subsequently, data was analysed using t‐test on the normally distributed data and Wilcoxon rank sum test on non‐normally distributed data. Multiple linear regression was then performed on the statistically significant continuous variables to adjust for age and gender. In addition, dentition, DMFT and marginal bone level were also adjusted for OHI using backwards elimination.
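A minimal, hypothetical Python sketch of this analysis plan is given below (Fisher's exact test, logistic and linear regression adjusting for age and gender, and a rank‐sum test); the simulated data frame, variable names and effect sizes are illustrative assumptions only and do not reproduce the SPSS analyses of the present study.

import numpy as np
import pandas as pd
from scipy.stats import fisher_exact, mannwhitneyu
import statsmodels.formula.api as smf

# Fabricated illustrative dataset: group, age, gender, a binary dental finding
# and a continuous bone-level measurement (not the study's data).
rng = np.random.default_rng(1)
n = 50
df = pd.DataFrame({
    "aHPP": rng.integers(0, 2, n),                   # 1 = patient, 0 = control
    "age": rng.normal(50, 12, n),
    "female": rng.integers(0, 2, n),
})
df["finding"] = rng.binomial(1, np.where(df["aHPP"] == 1, 0.5, 0.1))
df["bone_level_mm"] = 2.0 + 0.8 * df["aHPP"] + rng.normal(0, 0.5, n)

# Unadjusted 2 x 2 comparison (Fisher's exact test)
table = pd.crosstab(df["aHPP"], df["finding"]).values
print("Fisher p =", fisher_exact(table)[1])

# Categorical outcome adjusted for age and gender (multiple logistic regression)
print(smf.logit("finding ~ aHPP + age + female", data=df).fit(disp=0).params)

# Continuous, non-normal outcome: Wilcoxon rank-sum test, then an adjusted linear model
g0 = df.loc[df.aHPP == 0, "bone_level_mm"]
g1 = df.loc[df.aHPP == 1, "bone_level_mm"]
print("rank-sum p =", mannwhitneyu(g0, g1).pvalue)
print(smf.ols("bone_level_mm ~ aHPP + age + female", data=df).fit().params)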
Results 3.1 Study Population A total of 51 participants, 20 patients with aHPP and 31 healthy controls, were included in the study (Figure ). Patients with aHPP (4 men and 16 women) had a mean age of 53.10 ± 12.45 years (age range: 24–74 years). The 31 healthy controls consisted of 4 men and 27 women with a mean age of 48.61 ± 13.30 years (age range: 22–71 years) (Figure ). No statistically significant differences in age and gender were found between the groups. Only significant results adjusted for age and gender are described in the results section.
However, significant values without adjustment for age and gender are also reported in cases where multiple logistic regression was not possible. 3.2 Interview and Clinical Examination The results from the interviews revealed that a significantly higher number of patients with aHPP have experienced tooth fractures, caries in permanent teeth and early loss of permanent teeth compared to healthy controls. Results from the interviews are presented in Table . The clinical examination showed that the presence of 28, 48 and 46 and attrition of 11 were significantly lower in patients with aHPP compared to healthy controls (Table ). In addition, the presence of 14, 16, 24, 26 and 27 was significantly lower and the opacity of 31, 33, 43 and 44 was significantly higher in patients with aHPP than in healthy controls (Table ). No significant difference was found in OHI between patients with aHPP and healthy controls. 3.3 OP Examination Results from OP examination are shown in Tables and . Patients with aHPP had significantly lower presence of 28, 38, 46 and 48 and a lower number of teeth. Furthermore, the presence of denticles was significantly higher in patients with aHPP compared to healthy controls (Tables and ). In addition, the presence of 14, 16, 24, 26 and 27 was significantly lower in patients with aHPP than in healthy controls (Table ). 3.4 CBCT Examination The distance between CEJ and the marginal bone level buccal and palatinal for 11, mesial, distal, buccal and palatinal for 21, distal for 26 and mesial and buccal for 46 were significantly higher in patients with aHPP than in healthy controls (Table ). Moreover, patients with aHPP had significantly higher crown height for 11 than healthy controls (Table ).
Discussion To our knowledge, this is the first study investigating oro‐dental manifestations in patients with aHPP compared to a group of healthy controls. In the present study, patients with aHPP had a subjective experience of poorer dental health, which was not consistent with all of the objective findings. Objective findings in patients with aHPP included lower presence of specific teeth and lower prevalence of permanent teeth, higher opacity of a few teeth, higher presence of denticles and greater marginal bone loss at specific sites. Tooth fractures have previously been described in a single patient with HPP in a family study but have not been investigated systematically. In the present study, a significantly higher number of patients with aHPP reported experiencing dental fractures, which could not be confirmed in the clinical examination. The higher subjective experience of tooth fractures in patients with aHPP may be associated with a subjective feeling of having 'fragile teeth' due to reduced mineralisation of enamel and/or dentin. In addition, patients are possibly more aware of their dental health after receiving the HPP diagnosis, which may also lead to a subjective overinterpretation of dental problems. Previous studies have hypothesised an association between tooth fractures/cracked teeth and reduced mineralisation of enamel and/or dentin in patients with HPP, but in more severely affected HPP patients than in the present study. The disagreement between the studies may be explained by the milder phenotype seen in patients with aHPP compared to patients with paediatric‐onset HPP. Premature loss of particularly the primary incisors has previously been described as a cardinal symptom of HPP, which is in disagreement with the present study, where loss of primary teeth did not differ between the groups. This may be because events in early childhood are difficult to recall in adulthood (20% answered 'don't know' in the interview), or because the milder phenotypes among patients with aHPP (compared with patients with paediatric‐onset HPP) may have contributed only minor changes to the periodontium, which may explain why premature loss of primary teeth was not found at a higher level in aHPP patients than in controls in the present study. In comparison, a significantly higher number of patients with aHPP reported early loss of permanent teeth in the present study. In addition, data from the clinical examination showed that the presence of specific teeth and the prevalence of permanent teeth were significantly lower in patients with aHPP, which is in agreement with previous studies. It is hypothesised that the early tooth loss in patients with HPP may be caused by periodontal degradation or loss of alveolar bone. Furthermore, a histological study of patients with HPP has shown a defect in the root cementum, which may cause the Sharpey's fibres of the periodontal ligament to fail to attach to the tooth root, resulting in tooth loss.
In the present study, the subjective experience of caries in permanent teeth was significantly higher in patients with aHPP, but this was not verified by the DMFT score, as no significant difference was found between the groups. The results indicate that patients with aHPP had a greater subjective sense of their caries activity compared with the objective caries activity. However, this study did not include a clinical examination of caries activity, and OP and CBCT were recorded instead of bitewing X‑rays; thus, the conditions for diagnosing caries were not optimal . Caries could be a possible cause of missing teeth in patients with HPP, as caries has previously been described in patients with HPP, although mainly in case reports and review articles . However, these studies have not included information on oral hygiene and plaque levels, which is essential since caries is dependent on the presence of bacterial flora (plaque) . In the present study, DMFT was adjusted for OHI, and there was no significant difference in OHI between patients with aHPP and healthy controls. High OHI is usually associated with an increased risk of periodontitis and caries . The differences in the number of teeth present and dental disease between the groups may therefore not be explained by differences in OHI. In addition, patients with aHPP generally reported frequent dental visits due to knowledge of an increased risk of periodontal disease in HPP. This may also explain the good oral hygiene among patients with aHPP that was observed in the present study. Interestingly, patients with aHPP had significantly less attrition of 11 in the present study. It was expected that patients with HPP would be more susceptible to wear based on empirical evidence and the reduced activity of TNSALP in HPP, which may lead to less mineralised enamel and dentin. In the present study, questions regarding tooth grinding were not included. Thus, less attrition of 11 may be caused by a potential difference in the amount and pattern of grinding between the groups, but that may not explain why less attrition appeared on only a single tooth. On the other hand, the significantly higher prevalence of opacities on a few teeth in patients with aHPP in the present study suggests that there is a disturbance in the mineralisation of the dental hard tissues due to reduced activity of TNSALP in HPP . Thus, further investigation of the dental hard tissues is needed since the opacities could be related to other causes. To our knowledge, marginal bone level, crown height and root length have not previously been assessed on CBCTs in patients with HPP. The marginal bone loss was significantly greater at specific sites in patients with aHPP than in healthy controls. As the result was adjusted for OHI and there was no significant difference in OHI between the aHPP group and healthy controls, the marginal bone loss could not be explained by poor oral hygiene in the aHPP patients in the present study. Greater bone loss increases the risk of tooth loss and may possibly explain the tooth loss reported in the questionnaire survey, the clinical examination and the examination in the OP. Histological studies of teeth from patients with HPP are needed to clarify the explanation for this bone loss . Patients with aHPP had a significantly higher crown height of 11, which was unexpected. The higher crown height of 11 may be related to less attrition of 11.
On the other hand, some patients with aHPP had difficulties standing still in the CBCT machine, which resulted in artefacts on the CBCT scans, making the accuracy of some of the measurements questionable. Furthermore, participants from both the aHPP and healthy control group had restored dentition, which resulted in artefacts on the CBCTs, which may complicate interpretation of the CBCT measurements. The control group was on average younger than the aHPP group with a difference of 4.49 years. This may have an impact on the results, as there is generally a correlation between higher age and greater bone loss and higher age and higher DMFT . In addition, there was a different gender distribution with 87.1% women in the control group and 80% women in the aHPP group. Based on these limitations, the results were adjusted for age and gender. No systemic diseases that could affect the results were found in the two groups. A clinical periodontal examination was not performed and therefore it cannot be excluded that the bone loss in the aHPP group was due to periodontal inflammation. The results of the present study may improve knowledge about dental and oral manifestations in patients with aHPP. Although HPP is classified as a rare disease , dentists may meet patients with HPP in their everyday clinical practice. Dentists may also be the first to observe symptoms of HPP in patients and refer the patient for examination by their own doctor or specialists . Therefore, the awareness and knowledge of oro‐dental manifestations in HPP are essential to improve diagnostics as well as to provide supportive dental care. Conclusion Patients with aHPP have a subjective experience of poor dental health, which was not always in accordance with the objective findings. Loss of permanent teeth, less attrition, tooth opacities, denticles and larger distance between CEJ and marginal bone level are possible oro‐dental findings in patients with aHPP. Future studies investigating the histological characteristics of teeth in patients with HPP are required to increase knowledge of the impact on the dental hard tissues in aHPP patients. The results of the present study contribute to a more detailed understanding of dental and oral manifestations in patients with aHPP and may thus prove valuable in dental care of aHPP patients and thereby delay tooth damage and tooth loss. In addition, the medical community will be able to utilise the new knowledge, as diagnostics of adults with the mild form of HPP is difficult and complex. For patients and dentists, it is also of great importance to be able to use the new knowledge to provide more preventive and prophylactic care. Furthermore, craniofacial and orthodontic findings can be of interest and additional studies on this matter may be published in the future. Freja Fribert Jørgensen has contributed to acquisition of data, analysis and interpretation of data; has been involved in drafting the manuscript and revising it critically for important intellectual content; has given final approval of the version to be published and agreed to be accountable for all aspects of the work in ensuring that questions related to the accuracy or integrity of any part of the work are appropriately investigated and resolved. 
Xenia Hermann has made substantial contributions to conception and design, acquisition of data and interpretation of data; has been involved in revising the manuscript critically for important intellectual content; has given final approval of the version to be published and agreed to be accountable for all aspects of the work in ensuring that questions related to the accuracy or integrity of any part of the work are appropriately investigated and resolved. Nicola Hepp has made substantial contributions to conception and design and acquisition of data; has been involved in revising the manuscript critically for important intellectual content; has given final approval of the version to be published and agreed to be accountable for all aspects of the work in ensuring that questions related to the accuracy or integrity of any part of the work are appropriately investigated and resolved. Liselotte Sonnesen has made substantial contributions to conception and design, acquisition of data and interpretation of data; has been involved in revising the manuscript critically for important intellectual content; has given final approval of the version to be published and agreed to be accountable for all aspects of the work in ensuring that questions related to the accuracy or integrity of any part of the work are appropriately investigated and resolved. The authors declare no conflicts of interest.
Receptor Pharmacogenomics: Deciphering Genetic Influence on Drug Response
fe6b5732-c38a-4a3a-8ab4-32497c629940
11395000
Pharmacology[mh]
Over twenty years ago, the Human Genome Project, which provided the human genetic blueprint, was completed . The genetic makeup of humans is 99.9% identical, while 0.1% accounts for genetic diversity resulting in phenotypic differences between individuals . Each human being has a distinctive pattern of genetic variation that can impact disease susceptibility and drug response. The traditional medicine dogma is “one drug fits all” or “one dose fits all”, offering identical treatment to all patients that suffer from the same disease. However, the dosage can be individualized based on age or different comorbidities . Using this approach, a great heterogeneity can be observed in drug response, giving rise to adverse reactions or treatment failure. In recent years, a more patient-focused type of medicine has surfaced, namely precision medicine. Modern tools such as technology-based omics, artificial intelligence, and big data analytics provide the interface that can depict individual variability and treat patients based on their uniqueness . Pharmacogenetics and pharmacogenomics (PGx) are the pillars of precision medicine, attuning drug selection and dosage to each patient’s genetic profile . Nevertheless, PGx testing is not a routine procedure. Since 2020, the U.S. Food and Drug Administration (FDA) issued a PGx information table that currently contains over 100 drug–biomarker pairs . Moreover, for the last two decades, a powerful public database, PharmGKB (Pharmacogenomics Knowledgebase), has provided labeling information, clinical annotations, literature resources, and the pathways involved in gene–drug interactions . Launched in 2009, the Clinical Pharmacogenetic Implementation Consortium (CPIC) has established clinical practice guidelines for PGx data, occasionally contrasting FDA drug labeling . Thus, a global harmonized guideline needs to be implemented to translate PGx into clinical practice, increase the education of healthcare professionals, and offer patients personalized treatment. PGx investigates the genetic variability in drug target genes (membrane receptors, enzymes, and ion channels) also known as pharmacodynamics genes as well as pharmacokinetic genes, which are genes involved in drug metabolism . Despite significant efforts to gain a deeper understanding of all pharmacogenes, receptors PGx remains largely understudied. G protein-coupled receptors (GPCRs) and receptor tyrosine kinases (RTKs) are classical drug targets. GPCRs constitute the largest family of transmembrane proteins, transducing extracellular stimuli (hormones, neurotransmitters, and nutrients) into intracellular effects, thus controlling a plethora of bodily functions . Although they serve as the most successful druggable protein family, as 35% of drugs target them, no GPCR variant is included in the drug labeling information . Meanwhile, the genetic variation in a receptor tyrosine kinase, namely epidermal growth factor receptor (EGFR), whose overexpression was linked with cancer progression, is well defined in correlation with its inhibitors . The integration of personalized medicine into common clinical practice has long been proposed, but challenges are yet to be defeated . In this review, we will lay out the current status of PGx-informed pharmacotherapy regarding EGFR and emphasize the crucial significance of evaluating GPCR polymorphism in drug response. 2.1. An Overview on Pharmacogenes Polymorphism Pharmacogenomics (PGx) evaluates the influence of genetic variation on drug response. 
Although they are commonly used interchangeably, pharmacogenetics takes into account a single-gene effect, while PGx follows a more integrative approach by addressing the impact of the whole genome on drug treatment outcomes. Within the pharmacogenes, multiple types of genetic variations can arise due to single nucleotide polymorphisms (SNPs), indels (small insertions/deletions), inversions, duplications, or complex recombinations . As a result, protein structure or function can be altered. The functional consequences of these mutations can result in an enhanced or novel protein activity in the case of gain-of-function (GoF) variants or in a diminished or non-existent activity owing to loss-of-function (LoF) variants . Polymorphism in genes that encode drug-metabolizing enzymes or drug transport has a pharmacokinetic impact, while mutations in drug target genes (receptors, enzymes, and intracellular signaling proteins) result in an altered pharmacodynamics profile . The most widely studied pharmacogenes are involved in drug metabolism, especially the well-known CYP450 family members, which are highly polymorphic, such as the CYP2D6 variants that have been thoroughly characterized . On the contrary, pharmacodynamics drug–gene interactions are less understood . Assessment of genetic variants that impact drug response relies on two approaches: the candidate gene approach (CGAS) or genome-wide association studies (GWAS). Both techniques explore the link between a phenotype (e.g., an adverse reaction or drug efficacy) and a genotype, although their study design is different. The first one requires pre-specified genes known to be involved in drug pharmacology (drug metabolism or drug action). For example, warfarin dosing depends on the genetic polymorphism of two candidate genes: the metabolizing enzyme, CYP2C9 , and the drug target, VKORC1 . Since CGAS is a hypothesis-driven approach, it is more prone to bias if the wrong genes are chosen. Moreover, it is a self-limiting method because other important genes can be missed . Meanwhile, GWAS employs a genome-wide screening strategy without the need for an a priori hypothesis. For instance, GWAS revealed that SLCO1B1 polymorphism predicts the risk of statin-induced myopathy, leading to improved adverse reaction management for patients who have been prescribed statins . GWAS can identify novel mutated genes responsible for the observed phenotype, making it a useful approach for common variants, while CGAS is a method for studying rare variants . Comparisons between these genetic associations are reviewed here , favoring GWAS. It should be noted that GWAS have their own limitations: (a) they can identify many genetic variants and can be cumbersome to determine the causal genes; (b) large sample sizes are mandatory for achieving statistical significance; and (c) population stratification can generate a false association . Published GWAS are usually focused on understanding diseases rather than drug response. Hence, further PGx GWAS are vital for discovering relevant therapeutic biomarkers and the pathways involved in drug action . 2.2. Clinical Implementation of Pharmacogenomics Implementation of PGx in clinical practice is a slow process, regardless of its proven role in drug therapy. Few hospitals or clinics from all over the world are applying the PGx tools into their practice, performing routine genetic testing for their patients . 
The resistance of healthcare professionals against genotype-specific therapy arises from multiple reasons: (1) an apparent insufficient clinical utility; (2) scarcity of cost-effectiveness studies; (3) lack of standardized genotyping tests for each drug–gene pair; (4) difficulty in interpreting PGx tests; and (5) absence of a specific guideline or a step-by-step procedure for patients with a mutated pharmacogene . The gold standard of evidence-based medicine is represented by randomized controlled trials (RCTs). In traditional medicine, prescribing decisions regarding commonly used drugs can rely on disease-oriented evidence implemented by case series rather than RCTs . In addition, RCTs are costly, patient-specific factors (age, polypharmacy, and comorbidities) may not be taken into account, and rare variants can be overlooked . Thus, in the case of PGx-guided therapy, confirmation of clinical utility cannot rely only on RCTs; however, alternative forms of evidence (retrospective studies and meta-analysis) may be sufficient if they are properly designed and analyzed. The overall cost for PGx-based treatment has decreased in the past few years due to technological advances, e.g., from single gene testing to a multigene panel . PGx tests are not yet reimbursed due to a lack of widespread cost-effectiveness studies, although recent data demonstrate a positive economic outcome when PGx testing is performed . Lastly, PGx implementation is stalled as a consequence of an absent harmonized guideline. A global network of experts (researchers and clinicians) and patients would help propel PGx research through translation into clinical practice . Since 2005, the FDA has recommended the use of PGx data in drug development for new drug applications, reinforcing the inclusion of PGx information in drug labeling . Labeling sections can include indications and usage, dosage and administration, adverse reactions, clinical pharmacology, clinical studies, warnings, and precautions, and even boxed warnings can be used to display PGx biomarkers . Moreover, since 2020, the FDA has issued a Table of Pharmacogenetic Associations categorizing different gene–drug pairs with an emphasis on drug-metabolizing enzymes, drug transporters, and gene variants that can produce certain adverse reactions . Likewise, the European Medicines Agency published its first PGx guideline in 2012 covering the impact of pharmacokinetic modifications due to genetic variation in key proteins . Currently, there are several organizations whose purpose is to formulate recommendations to facilitate PGx application in clinical practice, such as the Clinical Pharmacogenetics Implementation Consortium (CPIC), the Dutch Pharmacogenetics Working Group (DPWG), the Canadian Pharmacogenomics Network for Drug Safety (CPNDS), and the French National Network (Réseau) of Pharmacogenetics (RNPGx) . Each assembly has a distinctive profile and a different approach. Until now, CPIC emitted 26 guidelines providing guidance only for patients that already have PGx data, unlike DPWG, CPNDS, and RNPGx, which advise performing genotype testing in daily clinical practice, with RNPGx offering specific clinical characteristics for when it is suitable to test . Considering this discrepancy, an important question is being raised: when and how to test? Ideally, genotype testing would be performed before prescribing, providing a patient-oriented, personalized guide to drug selection and dosage, reducing adverse reactions, and avoiding ineffective treatments . 
This approach is called pre-emptive PGx and shows great potential for achieving genotype-based therapy . Usually, a panel with different variants associated with commonly used drugs is tested and the information obtained is recorded electronically, generating a digital PGx profile for each patient serving as a starting point for any following medical problems. However, PGx tests are largely unregulated, unstandardized, and difficult to interpret. Based on high-quality and consistent evidence, PGx tests need to have the same gene and allele content linked with recommendations regarding medication selection and dosing . Initiatives and efforts to harmonize PGx specialization worldwide have been made. The Association of Molecular Pathology (AMP), an international non-profit scientific society, provides information regarding the choice of alleles to be included for testing. The CPIC and DPWG are standardizing terms for PGx test results, in line with the Pharmacogene Variation Consortium (PharmVar), which also offers a universal PGx nomenclature. Pharmacogenomics Clinical Annotation Tool (PharmCAT) uses CPIC variants and assigns the corresponding allele for PGx test results interpretation, generating a PGx report. PharmGKB is an important resource for healthcare providers that aids in spreading information about genetic variants and their implications in drug response . Moreover, globally, in the last 20 years, PGx education has been incorporated in medical and pharmacy schools, although some refinement is still needed (e.g., more hours of PGx education to be adopted, PGx as an independent pharmacy specialty, a more interdisciplinary perspective for better integrating PGx information, and creating E-learning programs for low-developed countries) .
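To make the idea of a pre-emptive panel recorded in a digital PGx profile more concrete, the following is a minimal Python sketch of how such a profile might be stored and queried. The diplotype-to-phenotype and drug-recommendation tables are simplified, hypothetical placeholders written for illustration only; they are not actual CPIC, DPWG, or PharmCAT content, and real prescribing decisions must follow validated assays and published guidelines.

```python
# Minimal sketch of querying a pre-emptive pharmacogenomic (PGx) profile.
# All mapping tables below are simplified, hypothetical placeholders.

# Example patient profile as it might be stored electronically:
patient_panel = {
    "CYP2C9": "*1/*3",
    "VKORC1": "-1639 G/A",
}

# Hypothetical mapping of diplotypes to a coarse phenotype label.
DIPLOTYPE_TO_PHENOTYPE = {
    ("CYP2C9", "*1/*1"): "normal metabolizer",
    ("CYP2C9", "*1/*3"): "intermediate metabolizer",
    ("CYP2C9", "*3/*3"): "poor metabolizer",
}

# Hypothetical drug notes keyed by (drug, gene, phenotype).
RECOMMENDATIONS = {
    ("warfarin", "CYP2C9", "intermediate metabolizer"):
        "consider a reduced starting dose and closer INR monitoring",
    ("warfarin", "CYP2C9", "poor metabolizer"):
        "consider a substantially reduced starting dose or an alternative agent",
}

def pgx_advice(drug: str, panel: dict) -> list[str]:
    """Return any stored PGx notes for `drug` given a patient's panel."""
    notes = []
    for gene, diplotype in panel.items():
        phenotype = DIPLOTYPE_TO_PHENOTYPE.get((gene, diplotype))
        if phenotype is None:
            continue  # no interpretation stored for this gene/diplotype
        rec = RECOMMENDATIONS.get((drug, gene, phenotype))
        if rec:
            notes.append(f"{gene} {diplotype} ({phenotype}): {rec}")
    return notes

print(pgx_advice("warfarin", patient_panel))
# -> ['CYP2C9 *1/*3 (intermediate metabolizer): consider a reduced starting dose ...']
```

The sketch also shows why harmonized allele definitions matter: if two laboratories report the same diplotype with different allele content or nomenclature, the lookup silently fails, which is precisely the standardization gap the consortia described above are trying to close.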
3.1. GPCR Pharmacogenetics Overview GPCRs are the largest superfamily of membrane proteins in the human genome with over 800 members . Currently, approximately 400 drugs approved by the FDA mediate their effect via 108 GPCRs. Moreover, novel GPCRs, which are not a target for any approved drug, are being investigated in clinical trials . Although functionally diverse, the 7TM receptors (seven-transmembrane receptors) share a common design: an extracellular N-terminus, a C-terminal cytoplasmic domain, and seven transmembrane domains (TM1-7) joined together by three extracellular loops (ECL1-3) and three intracellular loops (ICL1-3) . Upon ligand binding, a conformational shift takes place that favors the interaction with heterotrimeric G proteins. The Gα subunit has an intrinsic GTPase activity and transits between a GDP-bound inactive state and a GTP-bound active one. The GTP-bound Gα dissociates from the Gβγ subunit, triggering further cellular signaling. Based on the Gα subunit, G proteins are designated as Gαs, Gαi/o, Gαq/11, and Gα12/13 and regulate the activity of adenylyl cyclase, phospholipase C, or RhoGEFs . Another GPCR interacting partner is represented by β-arrestins, which mediate desensitization and receptor internalization. Furthermore, β-arrestin-dependent, G protein-independent signaling has been described , but the extent of G protein involvement in this signaling is still under investigation . GPCR mutations were previously linked with disease occurrence, namely mutated rhodopsin receptor with retinitis pigmentosa , the calcium-sensing receptor with hypercalcemic syndromes , arginine vasopressin receptor 2 with nephrogenic diabetes insipidus , and many others . Fewer studies exist regarding drug response. GPCR drug targets can be largely affected by missense mutations, followed by LoF variations that comprise nonsense mutations that introduce a premature stop codon, frameshift mutations, and copy number variations (deletions/duplications) . The resulting functional implications can include decreased ligand binding, modified G protein selectivity or altered expression, localization, trafficking, or desensitization . The most targeted GPCRs, based on the number of approved drugs, are the adrenergic receptors, followed by the histamine receptors, serotonin, muscarinic, dopamine, and opioid receptors . β-blockers, opioid analgesics, selective serotonin reuptake inhibitors, and second-generation antipsychotics that regulate dopaminergic and serotoninergic systems are among the most prescribed drugs in the US . In the following subsections, the PGx of these receptors in correlation with drug response will be described and notable genetic variants are summarized in . 3.2. Antihypertension Therapy and β1-Adrenoreceptor Polymorphism Adrenergic receptors are responsible for the regulation of the sympathetic nervous system, responding to endogenous catecholamines: adrenaline and noradrenaline. Ubiquitously expressed throughout the body, the nine receptor subtypes (α1A, α1B, α1D, α2A, α2B, α2C, β1, β2, and β3) have a multitude of indications as drug targets , but available PGx data focus mainly on β1-adrenergic blockade.
Gene variants associated with dysregulated β1-adrenergic signaling are rs1801252:S49G (N-terminus) and rs1801253:R389G (helix 8) . Greater reduction in blood pressure by metoprolol or carvedilol was observed for individuals with an R389R genotype than those who carried the R389G variant. Moreover, the S49R389/S49R389 diplotype can serve as a response predictor . The ADRB1 Ser49-Arg389 haplotype was associated with an increased risk of all-cause death, especially in individuals treated with verapamil, contrary to atenolol-treated individuals, indicating a protective role of the β1-blocker . However, atenolol produced a higher risk of adverse cardiovascular events in the case of individuals who have the ADRB1 S49G genotype . 3.3. Analgesics Treatment and µ 1 -Opioid Receptor Polymorphism The µ 1 -opioid receptor (MOP) belongs to the class of opioid receptors that binds encephalin and β-endorphin-mediating analgesics effects. The classical µ 1 -opioid receptor agonist is morphine, but more potent synthetic ligands have been developed, such as fentanyl and derivatives serving as first-line therapy in moderate to severe pain management . The most common SNP in the OPM1 gene is rs1799971:N40D, causing the loss of an N-glycosylation site , and being associated with a high sensitivity to opioids and increased susceptibility to opiate abuse . 3.4. Antipsychotic Drug Treatment and Dopamine/Serotonin Receptor Polymorphism The D1 class of dopamine receptors includes D1 and D5 receptors, while the D2 class of dopamine receptors is comprised of D2, D3, and receptors. Antipsychotic drugs, regardless of their generation (typical—first generation; atypical—second generation) act primarily as D2 receptor antagonists. Intensive D2 receptor blockade, being linked with adverse events, is predominant in the first generation of antipsychotics. The second generation of antipsychotics also blocks the serotonin receptors (5HT 1A , 5HT 2A/2C , 5HT 6 , and 5HT 7 ) . We focused mainly on atypical antipsychotics that act as dopamine–serotonin antagonists (clozapine, risperidone, olanzapine, and aripiprazole). DRD1 variants, rs4532 (A>48G) and rs5326 (G>94A), located in the 5′ UTR region are frequently studied . Previously, the DRD1 rs4532 variant was associated with decreased clozapine response , but a meta-analysis showed a lack of correlation . In the case of DRD2 polymorphisms, the most notable genetic variant is rs1799732: −141C Ins/Del, which is associated with reduced antipsychotic effects (Del allele compared with Ins/Ins genotype) . Risperidone treatment response may be altered by genetic variants of DRD2, such as rs1801028:S311C, 141C Ins/Del (rs1799732), T939C (rs6275), rs6277, and TaqID . While the rs6277 variant in DRD2 is associated with decreased response to ariprazole , the rs2514218 variant was correlated with better treatment response to aripiprazole or risperidone . The rs6280:S9G variant is the most studied variant of the DRD3 gene, affecting the N-terminal extracellular domain . This variant can serve as a predictor for olanzapine or clozapine responses . Atypical antipsychotic responses are similarly disturbed by mutated serotoninergic receptors. Risperidone efficacy was associated with rs6699866 in HTR6 , while polymorphic HTR2C , HTR3D, and HTR5A have insignificant effects . The HTR3B variant rs1176744:Y129S has an increased affinity for serotonin, with induction of dopamine signaling that opposes the antipsychotic effect, thus being associated with a decrease in treatment response . 
During clozapine or olanzapine treatment, the HTR2A haplotype (rs6311:1438A, rs6312:783A, rs6313:102T, and rs6314:1354T) is correlated with a lower risk of developing metabolic abnormalities . 3.5. Antidepressants and Serotonin Receptors Selective serotonin reuptake inhibitors (SSRIs) are the most prescribed antidepressants, indicated for major depressive disorder and anxiety-related disorders. Inhibition of the serotonin transporter (SERT) at the presynaptic terminal reduces serotonin reuptake, leaving more serotonin available to bind 5HT1A autoreceptors . Moreover, the SSRI vilazodone acts as a partial agonist for 5HT1A . Other antidepressants that modulate serotoninergic transmission are serotonin–noradrenaline reuptake inhibitors (SNRIs). The HTR1A polymorphism in antidepressant response is still controversial; a few studies found no correlation , while others showed a better antidepressant response for patients expressing rs1364043, rs10042486, and rs6295 genetic variants when treated with milnacipran, fluvoxamine, fluoxetine, or paroxetine . The rs6295 allele can also serve as a treatment predictor for citalopram response in older patients . In the case of HTR2A , two common variants located in the promoter region have been widely studied, namely rs6311 and rs6313 . These variants have been linked with higher treatment response; an increased risk of adverse reactions was associated with rs6311, while the rs6313 variant appears to have a protective effect . A couple of studies reported no interdependency between rs6311 and the antidepressant response , while for the rs6313 variant, a significant association was found . EGFR, known as ErbB1/HER1, belongs to the EGFR family together with ErbB2/HER2, ErbB3/HER3, and ErbB4/HER4. We chose EGFR as an example for receptor PGx-informed therapy because (1) it is expressed in different cancer types, (2) its mutational landscape is well characterized, (3) aberrantly expressed EGFR is targeted by the majority of tyrosine kinase inhibitors (TKIs), and (4) for a drug target, it has a significant amount of PGx data . EGFR signaling plays a pivotal role in regulating essential biological processes, including cell proliferation, differentiation, migration, adhesion, and survival .
As a membrane-spanning receptor, EGFR connects extracellular stimuli to the intracellular signaling pathways. Structurally, EGFR features four extracellular domains (I, II, III, and IV), one transmembrane domain, one juxtamembrane domain, and one tyrosine kinase domain, followed by a flexible regulatory C-terminus . Under resting conditions, EGFR exists as a monomer. Upon ligand binding to the extracellular domain, dimerization of the receptor occurs, followed by autophosphorylation and initiation of a downstream signaling cascade, namely the rat sarcoma (Ras)/mitogen-activated protein kinase (MAPK) and phosphoinositide 3-kinase (PI3K)/protein kinase B (PKB) pathways . Renowned as an oncogene, the aberrantly expressed EGFR represents a malignancy hallmark, being more prevalent in lung, colorectal, and brain cancers . Usually, EGFR oncogenicity is characterized by overexpression, gene amplification, or activating mutations . The majority of the mutations occur within the EGFR exons 18 to 21, which encode a portion of the EGFR kinase domain . The purpose of EGFR-targeted therapy is to detect EGFR-positive cancers and existent mutations and to combat frequently acquired drug resistance. Deviant EGFR is managed by two treatment directions, tyrosine kinase inhibitors (TKIs) and monoclonal antibodies (Mabs) (as described in , which contains information from the FDA Table of Pharmacogenomic Biomarkers in Drug Labeling). Among the approved EGFR-TKIs, gefitinib and erlotinib belong to the first generation of EGFR inhibitors. They work by binding reversibly to the TK domain of the EGFR, blocking ATP binding, thereby halting EGFR activation and cellular proliferation. Afatinib and dacomitinib are members of the second generation of EGFR-TKIs, binding irreversibly to the EGFR, thus inhibiting its kinase activity, while osimertinib binds covalently to the cysteine residue in the EGFR . Mobocertinib is designed to target EGFR exon 20 insertions . Monoclonal antibodies (cetuximab, panitumumab, and amivantamab) bind to the extracellular domain III of EGFR, promoting ligand blocking, receptor–antibody complex internalization, and degradation . Of note, kinase-activating mutations cannot disturb the activity of these antibodies since their binding site is in the extracellular domain. According to PharmGKB, the EGFR is a Very Important Pharmacogene (VIP). Currently, on the PharmGKB database, there are 21 clinical annotations regarding EGFR , 13 of them involving the interaction between EGFR inhibitors and their target . The FDA Table of Pharmacogenomic Biomarkers in Drug Labeling offers a perspective on available EGFR-targeted drugs and their PGx information, being indication-oriented (using an FDA-approved test to identify the specific drug targeted mutation for treatment). The clinical annotations from PharmGKB resemble the FDA Table of Pharmacogenetic Associations, offering information about the interaction between a genetic variant and a drug. Each pair contains the “Details” section where a specific allele is linked with an increased or decreased response, adverse reaction, or even with the likelihood of acquired resistance. Furthermore, the level of evidence is provided, with level 3 being the most assigned. In contrast, the FDA Table of Pharmacogenetic Associations contains gene–drug interactions that impact drug metabolism and have sufficient scientific evidence. At the moment, EGFR is not included in this table. 
Therefore, healthcare providers should take into account all available resources to guide and treat each patient individually based on their genetic makeup and to monitor therapy outcomes. The major indication of EGFR-targeted therapy is represented by non-small-cell lung cancer (NSCLC). In NSCLC, the effectiveness of conventional chemotherapy treatments does not exceed 40%, being also associated with a high degree of toxicity and poor prognosis. Abnormal EGFR signaling contributes to the oncogenic phenotype in over half of NSCLC patients. TKIs interact with the aberrantly expressed receptor, significantly increasing the survival rate of patients and exceeding classical chemotherapy response levels in NSCLC patients . Of all EGFR mutations, 45% are deletions or insertions (amino-acid residues 747 to 752) of exon 19, with the most common being delE746_A750. Additionally, the exon 21 point mutation L858R accounts for roughly 40% of EGFR mutations. The third most common type (10% of all EGFR mutations) consists of in-frame insertions and indels in exon 20 . Exons 18, 19, and 21 mutations are sensitivity predictors of EGFR-TKI therapy, while mutations in exon 20 are generally resistant . The emergence of EGFR T790M and C797S mutations (exon 20) has led to rapid resistance development against first-, second-, and third-generation EGFR-TKIs. The C797S mutation specifically hinders irreversible EGFR inhibitors’ ability to bind covalently to the kinase. Additionally, the rise of EGFR double and triple mutations poses challenges to the therapeutic efficacy of EGFR-TKIs, emphasizing the continuous necessity for potent new inhibitors . While promising results are limited for other types of cancer besides NSCLC, erlotinib in combination with gemcitabine received FDA approval for use in the treatment of locally advanced, unresectable, or metastatic pancreatic cancer on 2 November 2005. In pancreatic cancer, EGFR overexpression is correlated with advanced disease, poor survival, and metastasis. Combining EGFR inhibitors with chemotherapy results in the inhibition of tumor-induced angiogenesis, promotion of tumor cell apoptosis, and tumor regression in xenograft models . The non-frameshift deletion in exon 19 and the L858R point mutation in exon 21 are known as EGFR -sensitive mutations and respond well to EGFR-TKIs . However, these mutations are rare in pancreatic ductal adenocarcinoma (PDAC) . Similarly, in colorectal cancer, the main contributor to poor prognosis is EGFR overexpression . Hence, blocking EGFR signaling is the ideal approach. Cetuximab and panitumumab are both indicated for EGFR-expressing, RAS wild-type metastatic colorectal cancer, serving as monotherapy or in combination with chemotherapy . Additionally, cetuximab is also indicated for head and neck cancer in combination with radiation therapy, antineoplastic agents, or alone for recurrent disease or metastasis . However, in this case, cetuximab therapy has a poor response rate, regardless of high EGFR gene amplification. Less than 5% of head and neck cancers present EGFR mutations, making this cancer type particularly difficult to treat . Mutations or genetic variations in EGFR alone cannot account for all the differences in cancer patients’ responses to EGFR-targeted treatments. 
Beyond the EGFR genotype, genetic variances in other components of the signaling pathway downstream of EGFR or in other receptor tyrosine kinase pathways can significantly affect the effectiveness of specific EGFR-targeted therapies , emphasizing the personalized medicine approach of treating individual patients based on their particular physiology and needs. In the digital era, personalized medicine can be achieved in order to provide patients with tailored therapies. By employing PGx tools, each patient can be treated based on their unique genetic makeup, reducing the risk of adverse reactions or treatment failure. However, it is still unclear when the PGx tests are recommended and how to apply and interpret them. Despite great efforts, the implementation of PGx-informed therapy is still incomplete. Moreover, the healthcare authorities (FDA and EMA) issued guidelines regarding only pharmacokinetic genetic variants due to a lack of sufficient evidence. One problem may be the inconsistency noticed between different studies that analyzed the same gene–drug pair. Thus, standardization is also an absolute requirement to further advance the field of PGx. In the case of commonly prescribed drugs, genetic variants that affect drug response are still understudied. Here, we used an RTK EGFR as a prototype for emphasizing the crucial role of determining receptors’ genetic influence on drug response. As more than one-third of approved drugs target GPCRs, a complete mutational landscape must be generated. We focused only on genetic variants that affect targets, without including studies evaluating the downstream signaling partners. Cellular signaling functions as an interconnected machinery. Hence, the net drug response may be a consequence of different altered proteins from the same pathway or from intersected pathways.
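As a purely illustrative recap of the exon-level EGFR findings discussed above (exon 19 deletions and the exon 21 L858R mutation as TKI-sensitising; T790M, C797S, and most exon 20 insertions as resistance-associated), the short Python sketch below groups a reported variant into one of these coarse categories. The variant sets are deliberately incomplete and simplified, so this is a didactic sketch rather than a clinical decision rule; real variant interpretation requires curated databases and FDA-approved companion diagnostics.

```python
# Illustrative grouping of a few EGFR variants mentioned in the text.
# The sets below are deliberately incomplete and simplified.

TKI_SENSITIZING = {"delE746_A750", "L858R"}   # exon 19 deletion, exon 21 point mutation
RESISTANCE_ASSOCIATED = {"T790M", "C797S"}    # exon 20 point mutations; most exon 20
                                              # insertions are also generally resistant

def classify_egfr_variant(variant: str) -> str:
    """Return a coarse, illustrative label for a reported EGFR variant."""
    if variant in TKI_SENSITIZING:
        return "typically TKI-sensitising"
    if variant in RESISTANCE_ASSOCIATED:
        return "typically resistance-associated"
    return "unknown / requires expert curation"

for v in ("delE746_A750", "L858R", "T790M", "C797S", "G719X"):
    print(v, "->", classify_egfr_variant(v))
```

The deliberately conservative fallback ("unknown / requires expert curation") reflects the point made in the conclusion: EGFR genotype alone does not determine response, and variants outside the well-characterized classes need to be interpreted alongside the rest of the signaling pathway.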
Positionspapier der ÖGR und ÖGP zur Diagnose und Therapie der Sarkoidose 2024
4f9c7c03-f4a8-47b1-bdb9-eaa9d36db251
11464578
Internal Medicine[mh]
This position paper of the Austrian Society of Pneumology (ÖGP) and the Austrian Society of Rheumatology (ÖGR) is addressed to all medical disciplines. Its interdisciplinary aim is to provide an up-to-date overview of the epidemiology, pathogenesis, clinical presentation, diagnostics, organ manifestations and treatment options of sarcoidosis. Sarcoidosis is a complex, polyaetiological, inflammatory granulomatous multisystem disease whose pathogenesis has not yet been fully elucidated. Pulmonary involvement is present in up to 95% of all cases. The clinical picture is highly variable, ranging from asymptomatic courses to manifest organ damage . Prevalence rates vary with age, sex, ethnicity and geography: worldwide figures range from 2.2 per 100,000 inhabitants in Taiwan to 46 per 100,000 in Germany . Data on prevalence in Austria are not yet available. The mean age at disease onset is 46 ± 15 years ; women tend to be affected by sarcoidosis more frequently and at an older age than men . Histopathological examination In many cases a biopsy is sought, also in order to exclude other differential diagnoses such as malignant diseases and infections. The sample should be taken from an affected site that is as easily accessible as possible (e.g., skin or lymph nodes) . The histological hallmark of sarcoidosis is the granuloma: it is usually non-necrotising, has a central core of macrophage aggregates and multinucleated giant cells, and an outer layer of loosely organised lymphocytes, dendritic cells and occasionally B cells . In unclear cases, staining for acid-fast bacilli, culture for fungi and mycobacteria and a Mycobacterium tuberculosis PCR should be performed (Fig. ). Bronchoalveolar lavage Bronchoalveolar lavage (BAL) is a relatively simple, low-risk bronchoscopic procedure; a lymphocytosis > 15% and a CD4:CD8 ratio > 3.5 support the diagnosis of pulmonary sarcoidosis . In addition, staining for acid-fast bacilli, conventional culture for bacteria and fungi, mycobacterial culture and a Mycobacterium tuberculosis PCR should be performed on the obtained material in order to exclude an infection as an alternative diagnosis. Blood-based diagnostics There is no reliable diagnostic laboratory biomarker for detecting sarcoidosis or its organ manifestations, but laboratory findings can usefully complement the work-up and follow-up. Table summarises current recommendations in this regard . The strongest evidence exists for testing serum calcium at baseline and during follow-up, since hypercalcaemia has immediate therapeutic consequences . More specialised laboratory biomarkers such as angiotensin-converting enzyme (ACE), soluble interleukin-2 receptor or neopterin are unspecific on their own for diagnosis, but they can be helpful in unclear situations and, especially when elevated at baseline, can serve as additional parameters for follow-up . Radiological diagnostics The initial evaluation is usually performed with a chest X-ray, whose sensitivity is limited, however, in early or subclinical disease stages.
Pulmonary sarcoidosis has historically been staged on chest radiography according to Scadding: stages I and II comprise mediastinal/hilar lymphadenopathy (LAP) with or without lung parenchymal involvement, stage III parenchymal involvement without accompanying lymphadenopathy, and stage IV the development of pulmonary fibrosis ("sarcoidosis-associated pulmonary fibrosis" [SAPF]). Computed tomography (CT) of the thorax is a considerably more precise diagnostic tool that allows detailed visualization of granulomatous manifestations and of intra- and extrathoracic lymphatic involvement (Figs.). On CT, lung parenchymal changes in sarcoidosis are divided into non-fibrotic and fibrotic changes; non-fibrotic parenchymal changes may manifest as multiple peribronchovascular, perifissural, or subpleural micronodules, as multiple larger peribronchovascular nodules, as scattered larger nodules, or as consolidation as the predominant or sole abnormality. Pulmonary fibrosis in sarcoidosis may manifest as bronchocentric reticulation with or without dense parenchymal consolidation, with or without cavitation, or as large bronchocentric consolidations. In particular, bihilar lymphadenopathy combined with micronodular parenchymal changes in a perilymphatic distribution (involvement of the pleura and fissures) is, in the appropriate clinical context, highly suggestive of pulmonary sarcoidosis. 18F-FDG-PET/CT and other specialized imaging: 18F-fluorodeoxyglucose positron emission tomography combined with CT (18F-FDG-PET/CT) is gaining importance in specific indications in sarcoidosis. The current American Thoracic Society (ATS) guideline on the diagnosis of sarcoidosis recommends performing a PET/CT prepared according to a fasting protocol in patients with suspected cardiac involvement when cardiac magnetic resonance imaging (MRI) is not possible or inconclusive. In cases of pulmonary sarcoidosis, 18F-FDG-PET/CT provides a highly sensitive depiction of inflammatory activity in the lung, the mediastinal lymph nodes, and extrathoracic involvement, including identification of possible easily accessible biopsy sites. In addition, treatment response can be assessed, particularly in multiorgan sarcoidosis and in treatment-refractory cases. Depending on the organ involved, MRI is also used for specific questions, e.g., in the workup of cardiac sarcoidosis, where the presence of late gadolinium enhancement (LGE) signals fibrotic as well as inflammatory processes in the myocardium.
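The cut-offs quoted above lend themselves to simple decision support. The following Python sketch encodes the Scadding chest-radiograph stages and the BAL thresholds (lymphocytosis > 15%, CD4:CD8 ratio > 3.5) as stated in this paper; it is an illustration of the published criteria, not a validated diagnostic tool.

# Illustrative decision-support sketch of the Scadding stages and BAL thresholds quoted above.
# The cut-offs support, but never replace, the clinical and histological diagnosis.

def scadding_stage(lymphadenopathy: bool, parenchymal_involvement: bool, fibrosis: bool) -> int:
    """Return the Scadding stage (0-IV) for a chest radiograph finding."""
    if fibrosis:
        return 4
    if parenchymal_involvement:
        return 2 if lymphadenopathy else 3
    return 1 if lymphadenopathy else 0

def bal_supports_sarcoidosis(lymphocyte_fraction: float, cd4_cd8_ratio: float) -> bool:
    """True if BAL shows lymphocytosis > 15% and a CD4:CD8 ratio > 3.5."""
    return lymphocyte_fraction > 0.15 and cd4_cd8_ratio > 3.5

# Example: bihilar lymphadenopathy with parenchymal nodules and a supportive BAL
print(scadding_stage(True, True, False))    # -> 2
print(bal_supports_sarcoidosis(0.28, 4.1))  # -> True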
The granulomatous inflammation of sarcoidosis is regarded as a dysregulated immune reaction to as yet unknown environmental antigens in genetically susceptible individuals. An increased risk of sarcoidosis has been identified in people exposed to insecticides, molds, and metal, inorganic, and organic dusts, as well as in occupations such as firefighting and agriculture. Microbes such as Cutibacterium acnes and various mycobacterial species have also been linked to the disease. With a positive family history there is a 2- to 4-fold risk of developing the disease. Variants of genes involved in antigen presentation (HLA class II) and other genes such as BTNL2 and tumor necrosis factor-α (TNF-α) have been associated with sarcoidosis. These dysregulations of the innate immune response can lead to the persistence of epithelioid cell granulomas, which can then act as a focus for further inflammation and the development of fibrosis. The mammalian target of rapamycin complex 1 (mTORC1) pathway, for example, sustains granuloma formation. The accumulation of CD4 T cells in the affected organs is another hallmark of sarcoidosis, and cytokines such as TNF-α, interferon-γ (IFN-γ), IL-6, and transforming growth factor (TGF-β) are also upregulated. Sarcoidosis presents in highly varied forms and courses, from an asymptomatic incidental finding and the acute, often self-limiting Löfgren syndrome to the chronic multiorgan form and sudden cardiac death. Nevertheless, a simple rule has become established: treat to prevent organ damage or to improve quality of life. Assessing the risk to organs requires clinical examinations on the one hand and questioning of the patient on the other, particularly with regard to cardiac symptoms (e.g., syncope, palpitations). Quality of life is also frequently affected by non-organ-specific symptoms such as fatigue. A structured examination plan and a defined treatment goal are helpful for the shared decision on therapy. Whereas granuloma-related problems can usually be managed well with medication, fatigue as well as joint and muscle problems often still considerably impair even well-treated patients. Understandably, therefore, patients name functioning in everyday life and quality of life as the most important treatment goals in surveys. Patients also perceive the often necessary but side-effect-prone steroid therapy as problematic. The subjective concept of quality of life can be objectified using so-called PROMs (patient-reported outcome measures). Various questionnaires exist for this purpose, such as
the King's Sarcoidosis Questionnaire, which, however, tend to be used in study settings rather than in daily practice. For clinical practice, in any case, not only organ-specific symptoms should be asked about but also non-organ-related symptoms such as fatigue, pain, everyday cognitive impairment, small-fiber neuropathy, restricted mobility, and depression. Valuing patient-reported symptoms is important: severe dyspnea, for example, is comparatively rarely reported by lung patients but more often by patients with cardiac involvement. Fatigue persists in 2 out of 3 patients even after all organ-specific symptoms have resolved. Accompanying memory problems and depressive symptoms predict fatigue; lung involvement per se does not. A recently proposed care model comprises 5 steps: assessment of the patient's symptoms and needs, empowerment of the patient through support and education, treatment of complaints and comorbidities, drug treatment of the organ manifestations, and treatment by and involvement of extrapulmonary specialists as needed. Education and training are the prerequisite for the patient's qualified participation in treatment decisions, but also a necessary qualification for describing and discussing symptoms and a means of improving quality of life. On this basis, the treatment goal should be defined between patient and clinician, individually adapted to the clinical picture and risk constellation. The indication for treatment depends on 2 main factors: (1) risk of death or organ damage and (2) impairment of quality of life by symptoms. The overall mortality of sarcoidosis is around 5%. Cardiac and pulmonary involvement, including pulmonary hypertension and pulmonary fibrosis, are the most frequent causes of death. The currently recommended treatment, irrespective of the site of manifestation, is a stepwise therapy (Fig.), beginning with glucocorticoids as first line. Glucocorticoids: Dose: prednisone/equivalent 20-40 mg/day, in tapering dosage. The primary therapeutic agent of choice is a glucocorticoid at the lowest possible dose and for the shortest possible duration. Long-term use is associated with considerable organ and systemic toxicity. A recent Delphi consensus therefore recommends limiting initial corticosteroid therapy to 3 to 6 months, with the aim of tapering, after clinical response, to a maintenance dose of < 10 mg prednisone/day. In the case of long-term corticosteroid therapy (> 3 months), thorough education about the side-effect profile and an osteoporosis workup are recommended. With normocalcemia, adequate calcium (1000-1200 mg/day) and vitamin D intake (serum vitamin D3 target ≥ 30-50 ng/ml) should be ensured in line with general recommendations; deficiencies should be substituted under regular laboratory monitoring. Calcium substitution in sarcoidosis should always be performed under regular laboratory monitoring.
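As a minimal illustration of the time frame quoted above (initial prednisone equivalent of 20-40 mg/day, tapering toward a maintenance dose of about 10 mg/day within 3 to 6 months), the following sketch lays out a generic tapering schedule; the step size and interval are arbitrary assumptions for illustration, not a dosing recommendation.

# Minimal sketch of the time frame quoted above. Step sizes and intervals are
# arbitrary placeholders for illustration, not a dosing recommendation.

def sketch_taper(start_mg: float, maintenance_mg: float = 10.0,
                 step_mg: float = 5.0, weeks_per_step: int = 4):
    """Yield (week, daily dose) pairs stepping down toward the maintenance dose."""
    week, dose = 0, start_mg
    while dose > maintenance_mg:
        yield week, dose
        week += weeks_per_step
        dose = max(dose - step_mg, maintenance_mg)
    yield week, dose

for week, dose in sketch_taper(30.0):
    print(f"week {week:2d}: {dose:.0f} mg/day")  # reaches the maintenance level by week 16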
In symptomatic cases in which the disease persists despite corticosteroid therapy, unacceptable steroid side effects occur, or there is a high risk of mortality or permanent organ damage, a steroid-sparing, non-biologic second-line immunosuppressive therapy is indicated. Methotrexate (MTX): Dosage: 10-15 mg weekly, plus folic acid 5 mg once or twice weekly. The dose of methotrexate (MTX) can be increased to 20 mg or more if needed and if the side-effect profile is acceptable. It is recommended to combine MTX with folic acid. Subcutaneous MTX is recommended in the case of insufficient clinical response and/or gastrointestinal side effects. Reliable contraception is required! Azathioprine (AZA): Dosage: 50-250 mg daily. Genetic analysis of the TPMT alleles before initiation can help predict tolerability; agent of choice when there is a desire to have children. There is a large number of third-line therapeutics, TNF-α inhibitors being the most commonly used drugs. A selection is given here; for detailed information we refer to the respective professional societies and guidelines. Infliximab: dose 3-5 mg/kg i.v. initially and after 2 weeks, then repeated every 4 to 6 weeks. Adalimumab: dose 40 mg s.c. every 1 to 2 weeks. Nintedanib: in the case of a progressive fibrosing course of SAPF, antifibrotic medication with nintedanib should be given. New findings concern the JAK-STAT pathway and type 1 immunity. The efficacy of tofacitinib (5-10 mg twice daily) in patients with cutaneous sarcoidosis has been comprehensively demonstrated, albeit only in small cohorts. A number of studies have demonstrated active mTOR signaling in granulomas of sarcoidosis patients with lung, heart, and skin involvement. Clinical trials with the systemic mTOR inhibitor sirolimus have shown efficacy in patients with cutaneous sarcoidosis (6 mg once, then 2 mg orally per day over 4 months), with a long-lasting effect of more than one year after treatment, and in one patient with pulmonary sarcoidosis (2 mg/day for 10 months) with improvement of the CT findings and cough symptoms.
For all of the substances mentioned here, further and larger clinical trials are still needed before they can be recommended as routine therapy for sarcoidosis. Pulmonary involvement is the most frequent manifestation of sarcoidosis, occurring in up to 95% of cases, and is often characterized by mediastinal and bihilar LAP. Parenchymal involvement can manifest as a typical nodular pattern with perilymphatic distribution, up to irreversible fibrosis ("sarcoidosis-associated pulmonary fibrosis" [SAPF]). With increasing parenchymal involvement, the spontaneous remission rate and the overall prognosis decline. The clinical presentation is highly variable and at first manifestation is sometimes oligo- to asymptomatic. Respiratory symptoms such as cough, dyspnea, or thoracic pressure, systemic signs such as fever, B symptoms, and fatigue, or symptoms of other organ manifestations may be present. Thoracic involvement in acute forms of sarcoidosis (for example Löfgren syndrome or Heerfordt syndrome) shows a high rate of spontaneous remission of up to 85%. Invasive diagnostics are generally not indicated here when the clinical picture and imaging are typical, but close follow-up is. Treatment consists primarily of symptom-oriented management, for example with NSAIDs. Depending on the pattern of involvement, lung function may be normal, obstructive, or restrictive, and may show impaired diffusion. Exercise tests such as the 6-minute walk test or cardiopulmonary exercise testing can unmask functionally relevant pulmonary involvement. The decision for histological confirmation of the diagnosis should be preceded by a risk-benefit assessment and by estimating the likelihood of benign and malignant differential diagnoses (e.g., lymphoma, bronchial carcinoma, tuberculosis, silicosis). Isolated LAP with a typical presentation and without an indication for treatment can certainly be observed without histological confirmation, although the decision should be weighed on a case-by-case basis. If invasive workup appears necessary, it should be performed for LAP by endobronchial ultrasound-guided lymph node biopsy (EBUS-TBNA); a lymphocytic BAL with a CD4/CD8 ratio > 3.5 can support the diagnosis. In the case of macroscopically abnormal bronchial mucosa, forceps biopsy is recommended; biopsies from the lung parenchyma are only rarely necessary overall. The treatment of potentially chronic forms of sarcoidosis, apart from acute organ- or life-threatening situations, should be decided together with patients as part of shared decision making and depends on various factors: arguments for initiating treatment are impairment of quality of life due to symptoms and the risk of organ damage, e.g., with loss of lung function, extensive lung parenchymal involvement, and/or development of fibrotic changes. Up to 30% of sarcoidosis patients present with a skin manifestation, often as the first symptom. In cutaneous sarcoidosis, a distinction is made between acute and chronic forms on the one hand and between specific and nonspecific skin manifestations on the other (Tab.). This is important because the diagnosis of cutaneous sarcoidosis can only be confirmed from a skin biopsy in the case of specific skin manifestations. Because of the many different presentations of cutaneous sarcoidosis, a dermatology specialist should be consulted for assessment.
Erythema nodosum is a nonspecific skin manifestation of the acute course of sarcoidosis, Löfgren syndrome, and presents with livid-red nodules, most commonly over the shins. Histologically there is a septal panniculitis but no naked granulomas. The diagnosis of Löfgren syndrome is made clinically in conjunction with the other organ symptoms (erythema nodosum, bihilar lymphadenopathy, arthritis). The treatment of cutaneous sarcoidosis is determined by the type of skin manifestation, its extent, the risk of scarring, the patient's level of distress, and the extent of other organ involvement (Fig.). Treatment of the skin (e.g., with topical therapies) must also be considered when other organ systems primarily determine the treatment regimen. If systemic therapy is started, it should be continued for at least 3 months before its efficacy with respect to the skin is assessed. Musculoskeletal manifestations occur in up to one-third of patients, can cause complex clinical pictures, and range from arthralgias to destructive osseous lesions. Musculoskeletal involvement usually appears early in the course of the disease. Various imaging modalities such as 18F-FDG-PET/CT may prove useful for assessing the extent of involvement and disease activity. In the treatment of rheumatic manifestations of sarcoidosis, glucocorticoids have so far held a central role. Methotrexate is the preferred steroid-sparing agent, provided there is no renal involvement or other contraindication. Biologic therapies, in particular TNF-α inhibitors, are used in severe cases. Joint involvement: Acute arthritis (Löfgren syndrome): The most frequent musculoskeletal manifestation of sarcoidosis is Löfgren syndrome, which comprises the triad of symmetric hilar lymphadenopathy, (peri)arthritis, and erythema nodosum. Joint involvement typically affects both ankles but can rarely involve other regions, including the knee, wrist, elbow, and metacarpophalangeal (MCP) joints. On ultrasound, the affected joint region is usually notable for periarticular soft-tissue swelling and tenosynovitis. The disease shows a seasonal clustering, particularly in spring, and is characterized by a comparatively high spontaneous remission rate and a low probability of relapse. With the appropriate clinical presentation, histological tissue confirmation is generally not necessary; treatment is symptomatic with nonsteroidal anti-inflammatory drugs (NSAIDs). Chronic arthritis: The chronic form of sarcoidosis-associated arthritis is often accompanied by other extrapulmonary manifestations, in particular skin involvement. A symmetric oligoarthritis of the medium to large joints is characteristic. Destructive arthritides are rare. The clinical picture of Jaccoud arthropathy, which is characterized by a deforming but non-erosive arthritis, can be observed, for example, in patients with organ involvement at an advanced disease stage. Exclusion of gout, calcium pyrophosphate arthropathy (pseudogout), and septic arthritis is important. A synovial biopsy is often helpful to demonstrate a granulomatous infiltrate and to exclude other differential diagnoses.
An axial form affecting the spine or the sacroiliac joint is often asymptomatic; lesions are frequently detected incidentally during imaging studies. Although sacroiliitis in sarcoidosis usually occurs unilaterally, it can be difficult without biopsy to differentiate it from tuberculosis or from sacroiliitis caused by another infection. Further workup and determination of HLA-B27 can be helpful in distinguishing it from axial spondyloarthritis (SpA). Dactylitis: Another manifestation of sarcoidosis-associated arthropathy is dactylitis, which otherwise typically occurs in psoriatic arthritis and is characterized by bulky, swollen, thickened, and reddened "sausage fingers". It is more frequent in patients of African descent and in association with systemic involvement, and is typically located asymmetrically at the second and third phalanges with sparing of the MCP joints. Histologically, tenosynovitis and granulomas are found in the soft tissue. Muscle involvement: Sarcoidosis-associated myopathy with involvement of the skeletal muscles occurs histologically in up to half of all sarcoidosis patients, but only a fraction (0.5-2%) have classic symptoms. In addition to generalized weakness, fatigue, and reduced performance, the complaints include myalgias and proximally accentuated muscle weakness. Distinguishing it from other muscle diseases can often be challenging, particularly in the case of myopathy during glucocorticoid therapy. A muscle biopsy is helpful here for the differential diagnosis.
Sarcoidosis can affect any structure of the eye and ocular adnexa, such as the orbit, eyelids, lacrimal gland and ducts, conjunctiva, and the interior of the eye. The proportion of ocular involvement in systemic sarcoidosis is reported quite variably in the literature, at 10-71%. The most frequent vision-relevant manifestation is inflammation of the uvea, the middle layer of the eye. Uveitis is classified according to the primary focus of inflammation into anterior uveitis (iris, ciliary body), intermediate uveitis (vitreous, pars plana), posterior uveitis (choroid, retina), and panuveitis (all areas affected). Uveitis is thus an umbrella term for clinically and prognostically distinct entities. Uveitis is described in 20-30% of all patients with sarcoidosis. It is frequently the first clinical manifestation of systemic sarcoidosis (30-79%). In more than one-third of sarcoidosis-associated uveitis cases, the uveitis remains the only manifestation of sarcoidosis. The diagnosis of ocular sarcoidosis can be difficult, particularly in the absence of systemic signs. Biopsy of intraocular tissue is rarely performed for this indication because of the risk of complications. Diagnostic criteria have been developed by the IWOS (International Workshop on Ocular Sarcoidosis). These comprise 7 typical clinical intraocular signs (e.g., iris nodules, nodular or segmental periphlebitis) as well as various systemic investigations intended to raise the suspicion of ocular sarcoidosis. The symptoms of uveitis in sarcoidosis are all nonspecific and vary according to the form of uveitis, its location, and its time course. Photophobia, scotomas, blurred vision, as well as pain and redness are the more common symptoms. Treatment of sarcoidosis-associated uveitis depends on severity and the form of uveitis. Corticosteroids are the mainstay of treatment. Since topical corticosteroids act only in the anterior segment of the eye, systemic therapy is usually indicated. Local treatment alternatives are periocular and intravitreal corticosteroid injections, whose benefit and risk must be weighed in the individual case.
In chronic courses and with frequent relapses there is an indication for systemic immunosuppressive therapy (antimetabolites, calcineurin inhibitors) or biologics, although only adalimumab is currently approved for the treatment of uveitis. For suspected disease and to exclude active inflammation, standard ophthalmological examinations are sufficient. Prompt referral to an ophthalmologist should be made for unclear acute or recurrent eye redness and pain, or for unclear visual disturbances. Care at a specialized uveitis center is indicated for severe uveitis cases and for chronic or frequently relapsing courses. According to the current ESC/ERS pulmonary hypertension (PH) guidelines, PH is defined by an increase in the mean pulmonary arterial pressure (mPAP) > 20 mm Hg. When PH occurs in sarcoidosis, it is assigned, according to the current clinical classification, to group 5 (PH with unclear and/or multifactorial mechanisms). Epidemiological studies report that 6-20% of patients with sarcoidosis develop PH. The cause of the increased pulmonary pressure is multifactorial; granuloma formation in the pulmonary vessels, inflammatory changes, chronic thromboembolism, fibrosing processes in the lung and mediastinum, and compression of the pulmonary vessels by lymph nodes can all play a role. The most common symptom of PH in sarcoidosis is exertional dyspnea that cannot be explained by the underlying disease itself. The most important noninvasive examination method is echocardiography, which provides direct and indirect evidence of increased pulmonary pressure. The diagnosis is confirmed by right heart catheterization, which allows measurement of the mPAP and calculation of pulmonary vascular resistance. There are currently no approved drugs for the specific treatment of PH in sarcoidosis. Smaller studies have shown hemodynamic or clinical improvement with approved PH drugs, but these results have not been validated in larger investigations. In inflammatory active disease, the effectiveness of treatment with corticosteroids or immunosuppressive drugs has also been reported. The prognosis of severe PH in sarcoidosis is poor, which is why lung transplantation should be considered in good time in selected cases. Cardiac involvement occurs in 25% of all patients with sarcoidosis, but it is clinically relevant in only 2-7%. In these cases, however, cardiac involvement is a serious finding, since 13-25% of all sarcoidosis-related deaths are attributed to cardiac involvement. In approximately one-third of patients with cardiac sarcoidosis, the cardiac involvement is isolated. Pathophysiologically, cardiac sarcoidosis is characterized by an inflammatory, potentially reversible granulomatous infiltration followed by irreversible fibrosis. Direct histological confirmation by endomyocardial biopsy succeeds in only about 25% of cases because of the focal infiltration ("sampling error"). Depending on the location and extent of the infiltrates, conduction disturbances, ventricular arrhythmias, and declining systolic function occur.
Cardiac sarcoidosis is therefore an important differential diagnosis, particularly in patients under 60 years of age with high-grade AV block, ventricular arrhythmias up to sudden cardiac death, or non-ischemic heart failure. "Red flags" are, in addition to palpitations, syncope, and signs or symptoms of heart failure, regional wall motion abnormalities not explained by coronary artery disease with scar-related wall thinning on echocardiography (especially of the basal inferior septum); in addition, conduction disturbances such as bundle branch block or AV block are frequently seen on the ECG. Accordingly, in confirmed extracardiac sarcoidosis, a history focused on cardiac symptoms is recommended, as is a routine ECG. Echocardiography can additionally be considered. The above-mentioned red flags are indicative of cardiac involvement and should prompt screening for cardiac involvement. In isolated cardiac sarcoidosis, international guidelines require histological confirmation, which, however, frequently cannot be obtained because of the low sensitivity of endomyocardial biopsy. Diagnostically decisive is the demonstration of focal inflammation or fibrotic areas on cardiac MRI or on fasting FDG-PET. For fasting PET, a specific patient preparation protocol is followed before the examination (12-24 h high-fat, carbohydrate-free diet, 12-18 h fasting, and if necessary additionally i.v. heparin 15 min before the examination). In clinical reality, immunosuppressive therapy is therefore frequently initiated purely on the basis of noninvasive imaging in the context of a typical history, clinical picture, and signs of cardiac sarcoidosis, even when histological proof of sarcoidosis cannot be obtained. In biopsy-confirmed extracardiac sarcoidosis, cardiac imaging is sufficient to demonstrate cardiac involvement. Immunosuppressive therapy is indicated in the presence of AV block, ventricular arrhythmias, or heart failure. In addition, there are specific indication criteria for the primary prophylactic implantation of an implantable cardioverter-defibrillator (ICD). The exact prevalence of renal involvement in sarcoidosis is estimated at 10-30%. A fundamental distinction is made between renal sarcoidosis involvement in the sense of a granulomatous interstitial nephritis, and nephrocalcinosis with or without nephrolithiasis caused by hypercalcemia/hypercalciuria. With respect to the course, both acute kidney injury (AKI) and chronic forms (CKD) occur. Diagnostically, serum creatinine should be measured and the glomerular filtration rate (GFR) calculated in every sarcoidosis patient in order to quantify the impairment of renal function. Furthermore, serum calcium and, if the routine workup is abnormal, ideally also parathyroid hormone and 1,25-dihydroxyvitamin D3 (calcitriol) should be determined. Numerous other, more common causes of AKI or CKD must be considered in the differential diagnosis; renal sarcoidosis involvement is therefore usually a diagnosis of exclusion. Imaging with ultrasound of the kidneys and urinary tract should be performed whenever renal involvement is suspected.
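The renal workup described above relies on serum creatinine and an estimated filtration rate. As a hedged illustration, the sketch below implements the Cockcroft-Gault creatinine-clearance estimate, one common bedside formula; the position paper itself does not prescribe a specific equation.

# Illustration only: Cockcroft-Gault creatinine clearance, one common bedside estimate.
# The position paper does not mandate a specific eGFR equation.

def cockcroft_gault(age_years: float, weight_kg: float,
                    serum_creatinine_mg_dl: float, female: bool) -> float:
    """Estimated creatinine clearance in mL/min."""
    crcl = ((140.0 - age_years) * weight_kg) / (72.0 * serum_creatinine_mg_dl)
    return crcl * 0.85 if female else crcl

# Example: 46-year-old woman, 70 kg, serum creatinine 1.4 mg/dL
print(round(cockcroft_gault(46, 70, 1.4, female=True), 1))  # ~55.5 mL/min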
Nephrocalcinosis is the most frequent renal complication of sarcoidosis, reflects secondary damage from hypercalciuria, and is frequently asymptomatic. If nephrolithiasis or hypercalcemia is present, adequate drug therapy of the sarcoidosis should definitely be given; conventional CKD management is additionally recommended. If interstitial involvement in the sense of a granulomatous nephritis is suspected (moderately to severely impaired renal function with stage III AKI or CKD G4-5, frequently diffusely active sarcoidosis, unremarkable urinary sediment), a kidney biopsy should be considered to confirm the diagnosis. This shows non-caseating interstitial granulomas, which, in the absence of fibrosis or with only a low degree of fibrosis, entail corticosteroid therapy as the therapeutic consequence. Neurological symptoms occur in 5-10% of sarcoidosis cases. In addition, asymptomatic involvement of the nervous system has been demonstrated histologically in 15-25% in autopsy studies. Isolated neurosarcoidosis occurs in around 10-20%. These cases pose a particular diagnostic challenge, since differentiation from other neurological diseases is difficult. Neurosarcoidosis (NS) preferentially affects the central nervous system including the cranial nerves, but can less often also involve the peripheral nervous system including the musculature. The presence of an aseptic basal meningitis (in about 30% of cases) and unilateral or bilateral cranial nerve involvement (50-75%) are considered the "typical" picture of neurosarcoidosis. The optic nerve is preferentially affected (in about one-third of cases), with visual disturbance, or there is a peripheral facial palsy (in about one-quarter of cases). Less frequently, NS manifests with deficits of other cranial nerves, a myelopathy (in about one-quarter of cases), polyneuropathy (usually in the form of mononeuritis multiplex, a predominantly axonal sensorimotor polyneuropathy, or small-fiber polyneuropathy), or myopathy (< 10%). Pronounced leptomeningeal and parenchymal changes with involvement of the pituitary axis, encephalopathies, hydrocephalus, or epileptic seizures have been described. If neurological involvement is suspected, MRI imaging and a lumbar puncture are indicated in addition to the general sarcoidosis workup. However, there is no pathognomonic constellation of imaging and cerebrospinal fluid (CSF) findings. The MRI should be performed with contrast medium and yields an abnormal finding in up to 80% of NS cases, such as leptomeningeal enhancement (65%) or basal and periventricular hyperintensities on T2-weighted images (46%). The CSF usually shows a lymphocytic pleocytosis without intrathecal immunoglobulin synthesis, although transient oligoclonal bands may be detected. Additional determination of ACE, beta-2 microglobulin, lysozyme, soluble IL-2 receptor, and neopterin in the CSF can be useful. In addition to the neurological workup, an intensive systemic workup including exclusion of tuberculosis should be performed, also in order to identify possible biopsy sites so that a nerve or brain biopsy can be avoided if possible.
If this is not possible, such a biopsy must nonetheless be seriously considered in unclear cases of presumed isolated NS because of the consequence of long-term immunosuppression. No separate, scientifically validated treatment recommendations exist for NS. Glucocorticoids are used as first-line therapy, analogously to pulmonary sarcoidosis. Depending on the severity of the complaints, a corticosteroid pulse (e.g., 1 g methylprednisolone per day for 3 to 5 days) is also often given beforehand. In the case of insufficient treatment response or relapse, the additional use of methotrexate is the preferred recommendation. Alternatives in the event of treatment failure are azathioprine, hydroxychloroquine, mycophenolate, or infliximab. In the presence of a disease-associated (small-fiber) polyneuropathy, stabilization or improvement may occur under established immunomodulatory/immunosuppressive therapy. There are no separate recommendations for the treatment of persistent neuropathic pain; anticonvulsants, antidepressants, opiates, and local anesthetics should be used analogously to the treatment of other neuropathic pain. Clinically manifest involvement of the gastrointestinal tract in sarcoidosis is extremely rare (< 1%), whereas symptomatic involvement of the liver is observed more frequently, at 5-20%. The rate of subclinical involvement of the gastrointestinal organs is considerably higher. Gastrointestinal sarcoidosis: Gastrointestinal involvement in sarcoidosis can occur anywhere from the mouth to the rectum. The upper gastrointestinal tract (particularly the stomach) appears to be affected more often than the lower. In the mouth, sarcoidosis can manifest as nodules or ulcerations of the buccal mucosa or, in the gums, as gingival hyperplasia or gingivitis. Involvement of the esophageal mucosa and/or musculature usually affects the lower third of the esophagus and can lead to dysphagia and/or odynophagia up to an achalasia-like picture. The changes visible on endoscopy are usually nonspecific (plaque-like or nodular) and require biopsy for clarification. Gastric involvement is characterized by the presence of a granulomatous gastritis. Important differential diagnoses are Crohn's disease and other forms of chronic gastritis, since these can also lead to granulomas in the gastric mucosa. Gastric involvement is usually an incidental finding: dyspeptic complaints or complications such as gastrointestinal bleeding or obstruction are rare. Patients with known sarcoidosis who have corresponding symptoms should undergo endoscopic workup. Dyspeptic symptoms can be countered with proton pump inhibitors, although controlled studies on this are lacking. Involvement of the small intestine, colon, rectum, and pancreas is extremely rare. Hepatic involvement: Hepatic involvement in sarcoidosis is common (50-90%) but symptomatic in only 5-15%. The clinical spectrum can range from an asymptomatic elevation of liver enzymes (predominantly a cholestatic pattern with elevation of alkaline phosphatase and gamma-glutamyl transferase) to hepatomegaly, right upper quadrant pain, and pruritus.
The morphological appearance of hepatic sarcoidosis can resemble that of primary sclerosing cholangitis (intrahepatic biliary strictures) or primary biliary cholangitis, but intrahepatic mass lesions (usually close to portal vein branches and generally up to a maximum of 3 cm) can also occur. Development of biliary fibrosis or cirrhosis with portal hypertension is rare (in 6-8%). Here too, the diagnosis is made histologically and requires the exclusion of other causes of granulomas in the liver (e.g., tuberculosis, granulomatous hepatitis in the context of drug-induced liver injury, primary biliary cholangitis, other autoimmune diseases such as vasculitides). In addition to the standard therapy for sarcoidosis, empirical treatment with ursodeoxycholic acid 13-15(-20) mg/kg can improve the cholestasis parameters in the case of intrahepatic cholestasis. Splenic involvement in sarcoidosis is common and usually asymptomatic. Splenomegaly is present in up to 80% of patients, and nodular lesions in the spleen can also be detected on imaging.
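The weight-based ursodeoxycholic acid range quoted above (13-15, up to 20 mg/kg per day) translates into a total daily dose as in the following purely arithmetical sketch, which is an illustration and not a prescribing aid.

# Sketch of the weight-based ursodeoxycholic acid range quoted above.
# Purely arithmetical illustration, not a prescribing aid.

def udca_daily_dose_range(weight_kg: float, low=13.0, high=15.0, maximum=20.0):
    """Return (low, high, max) total daily dose in mg for a given body weight."""
    return weight_kg * low, weight_kg * high, weight_kg * maximum

low, high, cap = udca_daily_dose_range(80.0)
print(f"{low:.0f}-{high:.0f} mg/day (up to {cap:.0f} mg/day)")  # 1040-1200 mg/day (up to 1600 mg/day)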
Multimodal interprofessional rehabilitation is applied for through the social insurance carriers on an outpatient or inpatient basis, depending on the leading symptom or organ (lung/heart/neurological/musculoskeletal system), and has proven positive effects in patients with sarcoidosis. Physical performance, measured by the 6-minute walk test, can be improved, and similar data exist for strength gains of the lower extremity. The leading symptom of fatigue, which can be accompanied by reduced quality of life, cognitive deficits, and even depression, could also be significantly improved. A prospective German multicenter study and a study on the long-term effects of rehabilitation confirmed these data; in these, data on quality of life were additionally collected, which were likewise positive. For the social insurance carriers, positive effects on the ability to work could be demonstrated, which is decisive because sarcoidosis patients are usually first diagnosed at working age. Referral to rehabilitation with a rheumatological focus should also be considered in appropriately presenting cases. Because of the complexity of the disease, very precise pre-rehabilitation diagnostics are necessary in sarcoidosis and have corresponding consequences: sleep apnea screening may be indicated, and exercise-induced hypoxemia or impaired diffusion may be unmasked, which should lead therapeutically to interval training. Cardiac involvement requires an adaptation of the setting in the sense of rhythm monitoring, and pulmonary hypertension would likewise entail a marked change in the medical exercise therapy. Neurocognitive changes call for occupational therapy intervention; in addition, psychological support and socio-medical measures may be necessary. Likewise, training sessions providing a detailed explanation of the disease are perceived by patients as very helpful. Depending on disease severity and the initial site of manifestation, there are different latency periods from the first symptoms to the diagnosis and therefore variable patient pathways. In acute sarcoidosis or cutaneous sarcoidosis, the disease is often suspected at first glance. In many cases, however, those affected present with nonspecific symptoms, and several years often pass, with consultation of many physicians, before the correct diagnosis is made. In these cases a "guide" through the healthcare system is also often lacking, since, as with other rare and chronic diseases, a standardized, structured care pathway for sarcoidosis is lacking in this country.
At this stage, misdiagnoses (such as asthma, burnout, depression) also occur because of the often changing and nonspecific complaints. Once the diagnosis of sarcoidosis is established, patients are still frequently confronted with misconceptions, e.g., that sarcoidosis is a "benign" disease that usually disappears on its own, or that the lung is always involved in sarcoidosis and that other complaints have no relevant disease value. It is particularly problematic that sarcoidosis frequently occurs in mid-life, at a time when social, family, and professional "functioning" are particularly demanded. In this situation it is all the more burdensome to be ill or unable to perform while, for example, lung function tests often show normal values. The presence of granulomas in organs does not explain all symptoms in sarcoidosis: in addition to organ-specific abnormalities, which can be checked by function tests and laboratory work, there is a multitude of general symptoms such as fatigue, fever, anorexia, arthralgia, muscle pain, cognitive failure, and weakness. Although, especially from the patient's perspective, they are essential components of the clinical picture of sarcoidosis, these symptoms are often not recognized, heeded, or documented as such. Fatigue is probably the most limiting problem in the daily life of many sarcoidosis patients, and the accompanying limitation of performance is often underestimated from the outside. Patients' priorities in their treatment are not lung function or imaging, but improvement or preservation of their quality of life and functioning in everyday life. In a more complex course, when several organs are affected, it is advisable to manage patients at an interdisciplinary center where the various specialties are united under one roof and where expertise in rare diseases exists. Ideally, these centers also have the means to educate patients about their disease. An interdisciplinary center with a high case volume enables the recognition and demanding care of even rare disease constellations according to international standards. In addition, of course, good links to the general practitioner and the attending office-based specialists are needed as the basis of care and ideally as "trusted physicians" alongside dedicated specialists, whose resources are naturally limited. Since its first description in 1869 by Jonathan Hutchinson, sarcoidosis has been the subject of medical research, but open questions remain regarding the exact pathogenesis, optimal treatment, and prognostically different courses. Genetic variants, the mTOR pathway, and the involvement of cytokines, T helper cells, and Janus kinases are just some areas of current pathophysiological research and treatment development. The clinical variability of sarcoidosis is high and often leads to unclear courses and uncertain prognoses. Clinical phenotyping according to the leading organ involvement (eye-skin-heart-CNS, musculoskeletal-skin, lung-lymph node, gastrointestinal) offers one way of differentiation, which also has implications for therapy. Drug treatment, consisting of steroids and immunosuppressants, is widely accepted and relatively standardized, but is frequently associated with side effects.
International guidelines and position papers on therapy exist , but optimal dosing, treatment duration, and the possibility of combination therapies have not yet been sufficiently clarified on an evidence basis. Especially in a disease that manifests so variably and frequently runs a chronic course, and that can be accompanied by a high and often polytopic symptom burden, patient education, counseling, shared decision-making, and multidisciplinary care according to standardized pathways are indispensable. From the perspective of both patients and clinicians, there is in any case a high future need for clinical and basic research, as well as for optimization of the existing processes in the relationship between patients, attending office-based physicians, and specialists in centers. This position paper should be understood as an attempt to initiate and advance this process of networking and further development (Fig. ).
Ovarian Cancer Cells in Ascites Form Aggregates That Display a Hybrid Epithelial-Mesenchymal Phenotype and Allows Survival and Proliferation of Metastasizing Cells
7199f922-c072-40ea-bd17-9337586fb484
8775835
Anatomy[mh]
Epithelial ovarian cancer (EOC) is the deadliest gynecological malignancy in women and is diagnosed at an advanced stage in most instances . Unlike most carcinomas, EOC very rarely spreads through the bloodstream and preferentially disseminates into the peritoneum , giving rise to ascites, i.e., an accumulation of abdominal fluid containing floating cancer cells, and to peritoneal nodules. Indeed, ascites is present in more than one-third of ovarian cancer patients at diagnosis and in almost all cases of relapse . Dissemination into the peritoneum is mediated by shedding of tumor cells from the primary site and/or by reseeding of peritoneal metastases . Ovarian cancer cells break away from the primary or secondary tumor masses individually or as cell-cell aggregates . Multicellular aggregates in ascites, also called spheroids, are crucial for metastatic dissemination of ovarian cancer. First, spheroids are abundant in patients with advanced disease, and a correlation has been reported between resistance to platinum-based chemotherapy and the percentage of spheroids in patients . Moreover, several model systems based on experimental aggregates of established cell lines or purified cell populations have shown that ovarian cancer cells forming 3D structures are protected from anoikis and chemotherapeutics, preserve cancer cell stemness, attach to and clear the mesothelial lining of the peritoneum, adhere to the underlying extracellular matrix (ECM) and eventually form metastatic nodules (for reviews see ). Again using 3D models, these abilities have been linked to the expression of a variety of proteins, such as N-cadherin , autocrine-secreted fibronectin , integrins and an array of mesenchymal markers . The cellular composition of whole ascites varies across patients, and in some instances tumor cells account for less than 1% of the whole ascites volume . Izar et al. carried out single-cell RNA sequencing of ascites cells from EOC patients and showed that only approximately 7.9% expressed bona fide ovarian cancer-associated markers, such as EPCAM and CD24. Indeed, EOC ascites has been described as composed of malignant and non-malignant cells, the latter in turn identified as cancer-associated fibroblasts , mesothelial cells , macrophages and leukocytes . Given the relatively variable frequency of spheroids across ascites samples, it is likely that the cells composing spheroids were not thoroughly captured by RNAseq protocols. We therefore hypothesized that spheroids show a more specific phenotypic pattern, possibly associated with their ability to form metastases. We studied spheroids from EOC patients' ascites using minimal manipulation and, surprisingly, show here that they are composed almost exclusively of ovarian cancer cells; these cells display a full epithelial phenotype but also a partial mesenchymal transition, consisting of the expression of alpha smooth muscle actin (αSMA) and the secretion of fibronectin (FN1), which are critical for the formation and survival of 3D structures formed by ovarian cancer cells. 2.1. Collection of Samples Between 2019 and 2020, we collected 23 ascites samples of epithelial ovarian carcinoma (EOC) from 15 patients at paracentesis carried out as palliative care management. Samples were collected according to the "Profiling" Protocol approved by the local Ethical Committee (initially approved on 2 March 2011, n° 4/2011, by the Comitato Etico Interaziendale AOU San Luigi and then on 14 October 2020, n° 296/2020, v.8.0, by the competent Ethical Committee of Istituto di Candiolo).
Informed consent signed by the enrolled patients therefore complies with the standards set by the Declaration of Helsinki. Only ten of the 23 samples, collected from six patients, were suitable for phenotyping, as the others were highly hemorrhagic. Fresh abdominal effusion fluids were centrifuged at 1000 rpm; the cell pellet was washed in PBS and red blood cells were immediately lysed with BD Pharm Lyse™ (BD Biosciences); the cells were then washed once in basal culture medium (RPMI 1640, Gibco™). Part of the bulk population was immediately fixed in 10% buffered formalin (Sigma) and embedded in Histogel™ (Thermo Scientific ® ), while the rest of the sample was allocated to in vitro experiments. 2.2. Cell Lines Commercially available ovarian cancer cell lines (OVCAR-4, A2780, OVCAR-8, IGROV-1, COV 362, OV-90) and human fibroblasts (BJ-5ta CRL-4001™) were obtained from the American Type Culture Collection (ATCC). All cell lines were characterized and maintained as suggested by the provider. 2.3. Immunofluorescence Ovarian cancer spheroids were characterized by immunofluorescence using 5 μm thick sections cut from formalin-fixed paraffin-embedded samples, mounted on slides and treated following standard procedures. Tissue sections were deparaffinized with absolute xylene and rehydrated with decreasing concentrations (100%, 70%, 50%) of ethyl alcohol. Antigen retrieval was performed by boiling the sections in citric acid, at pH 6, in a microwave (750 W, twice for 5 min); after cooling to room temperature, the sections were treated with Dako Protein Block/PBS 0.1% Tween-0.3% Triton-1% BSA for 30 min in a moisture chamber at room temperature. Directly conjugated antibodies were diluted in Dako Antibody Diluent with Background Reducing Components according to the manufacturer's protocols, and slides were incubated with the different conjugated primary antibodies in a closed humid chamber at 4 °C overnight. Subsequently, nuclei were stained with DAPI and slides were mounted with SI PREP, AQUA-MOUNT (Thermo Fisher Scientific, Waltham, MA, USA, cod. APTA-125-AM). To perform IF on adherent cells, cell lines were plated in 6-well plates on glass coverslips coated with 0.1% gelatin from porcine skin (Sigma-Aldrich, Saint Louis, MO, USA, #G9136) and allowed to adhere overnight. Cells were washed in PBS, fixed in 4% paraformaldehyde (PFA), permeabilized with absolute methanol for 15 min at 4 °C and saturated with Dako Protein Block/PBS 0.1% Tween-0.3% Triton-1% BSA for 30 min, then incubated overnight with the different conjugated primary antibodies as described above. Nuclei were stained with DAPI and glass coverslips were mounted with SI PREP, AQUA-MOUNT (Thermo Fisher Scientific, cod. APTA-125-AM). When necessary, commercial ovarian cancer cell lines and ascites pellets were embedded in growth factor-reduced Matrigel (Tissue culture supplement, Corning ® Matrigel ® , 734-0272) and plated in μ-Slide 8-well imaging chambers (80826, Ibidi GmbH). Approximately 5000–8000 cells were dispersed in a 15 μL droplet of Matrigel (final Matrigel concentration 75%). Once the Matrigel had solidified, 200 μL of the appropriate culture medium was added to each well. After 5 days, the culture medium was removed and the Matrigel drops were washed in PBS, fixed in 4% PFA, permeabilized with absolute methanol, saturated and incubated with the different conjugated primary antibodies as described above. Nuclei were stained with DAPI and 200 μL of PBS was added to each well.
When necessary, 3D spheroids were cultivated in presence of PND-1186 (VS-4718) for 5 days. Immunofluorescence was carried using the following conjugated primary antibodies selected from the CyCIF list (website https://www.cycif.org (accessed on 29 November 2021)): EpCAM (VU1D9) Mouse MAb (AlexaFluor ® 488 Conjugate, Cell Signalling Technology#5198); βactin (13E5) Rabbit MAb (AlexaFluor ® 555 Conjugate, Cell Signalling Technology, #8046); Ki-67 (D3B5) Rabbit MAb (Alexa Fluor 488 Conjugate, Cell Signalling Technology, #11882); anti-CD11b rat MAb [M1/70] (Alexa Fluor ® 488, abcam, ab197701); anti-CD163 rabbit MAb [EPR14643-36]-C-terminal (Alexa Fluor ® 647, abcam ab218294); anti-Fibronectin rabbit MAb [F1] (Alexa Fluor ® 488, abcam, ab198933); anti-PAX8 rabbit antibody [EPR18715] (Alexa Fluor ® 647, abcam, ab215953); anti-alpha smooth muscle actin rabbit MAb [EPR5368] (Alexa Fluor ® 555, abcam ab202509); CD45RB rabbit MAb Monoclonal Antibody (PD7/26), Alexa Fluor 488 (eBioscience, #53-9458-82); DAPI (Thermo Fisher Scientific, 62248). Unconjugated primary αSMA antibody was Clone 1A4 (Dako, #M0851). 2.4. RNA In Situ Hybridization and Protein Immunohistochemistry To evaluate Human ACTA2 expression in ovarian cancer cells, a co-detection assay was used on formalin-fixed paraffin-embedded (FFPE) samples, combined RNAScope 2.5 RED technology (Advanced Cell Diagnostics-a Bio-Techne brand ® ) on ACTA2 mRNA and immunohistochemistry of PAX8 protein. Five μm thick sections of formalin-fixed paraffin-embedded tissue samples, deparaffinized and dehydrated following standard procedures, were pretreated with the following consecutive incubations: (i) 10 min at room temperature with hydrogen peroxide; (ii) about 10 min at boiling temperature with target retrieval maintained in a steamer (in particular: 10′ for #5565 sample and 13′ for #5326 and #5819 samples); (iii) 10 min at 40 °C with protease in ACD HybEZ hybridization oven. To set the optimal condition, a wide range of temperature and time conditions in pretreatment steps with positive and negative probes control were explored. To ensure interpretable results, the assay was performed using in parallel positive and negative controls: the endogenous housekeeping gene was Hs-PPIB (ACD-Biotechne ref. 313901) used as positive control to assess both tissue RNA integrity and assay procedure. The bacterial gene dapB (ACD-Biotechne ref. 310043) was used as negative control to assess background signals. Pretreated samples were incubated for 2 h at 40 °C with appropriate probes (ACTA2 probe ref. 444771 was used concentrated 5×) and then the signal was amplified (1 h AMP5) and twice detected with fast red substrate. The samples were saturated with TBS-T-T+ 5% Normal Goat serum (Vector-D.B.A S-1000) for 1h at room temperature, and primary antibody against PAX8 (PAX-8 (MRQ-50), 760-4618 Roche ref. 760-4618) was incubated in a humid chamber overnight at +4 °C. Secondary antibody anti-mouse-HRP (Dako K-4001) was stained 1 h at room temperature, and the signal has been developed with DAB substrate (Dako K-3468) and counterstained with hematoxylin 50% in water (Sigma GHS1128). 2.5. Viability Assay CellTiter-Glo ® assay was used to evaluate the effect of PND-1186 (VS-4718, Selleckchem, #S7653) on ovarian cancer cells proliferation, according to the manufacturer’s protocol (Promega, Madison, WI, USA). Fold increase has been calculated for 0.5 μM and 1 μM of inhibitor concentration and plotted using a GraphPad Prism version 7.02 (San Diego, CA, USA). 2.6. 
Image Acquisition and Quantification Images of fixed cells were acquired with a Leica TCS SPE II confocal microscope at 40× magnification and were analyzed with NIH ImageJ (W. Rasband, NIH) software. To obtain the percentage of tumor cells per field, we divided the PAX8-positive area by the DAPI-positive area and statistically estimated the area occupied by a single nucleus. Since spheroids are very heterogeneous in shape and size, sometimes even within the same sample, we scored αSMA on a 0 to 3+ scale reflecting the amount of antibody staining in the cells: samples scoring 0 to 1+ were considered slightly positive, samples scoring 2+ positive, and samples scoring 3+ highly positive. Epifluorescent 4× and 40× images of 3D spheroids were collected using the inverted Ti2 Eclipse microscope of LIPSI (Nikon) and analyzed with NIS-Elements (Nikon) software. To estimate the size distributions of the aggregates formed by ovarian cancer cells and of the spheroids of a representative ascites sample in the presence of the PND-1186 FAK inhibitor, we used NIS-Elements software to calculate the Feret diameter in 4× images from two different experiments.
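The area-ratio estimate described above can also be reproduced outside of the ImageJ interface. The following is a minimal sketch, not the authors' actual macro: it assumes two binary masks per field (one thresholded from the DAPI channel, one from the PAX8 channel) under hypothetical file names, and an assumed mean single-nucleus area used to convert areas into approximate cell counts.

```python
import numpy as np
from skimage import io

# Hypothetical binary masks exported per field after channel thresholding.
dapi_mask = io.imread("field01_dapi_mask.tif") > 0   # all nuclei
pax8_mask = io.imread("field01_pax8_mask.tif") > 0   # PAX8-positive nuclei

MEAN_NUCLEUS_AREA_PX = 350.0  # assumed average area of a single nucleus, in pixels

# Fraction of the DAPI-positive area that is also PAX8 positive.
dapi_area = dapi_mask.sum()
pax8_area = np.logical_and(dapi_mask, pax8_mask).sum()
tumor_fraction = pax8_area / dapi_area if dapi_area else float("nan")

# Rough per-field cell counts obtained by dividing areas by the nucleus area.
total_cells = dapi_area / MEAN_NUCLEUS_AREA_PX
tumor_cells = pax8_area / MEAN_NUCLEUS_AREA_PX

print(f"PAX8+ area fraction: {tumor_fraction:.1%}")
print(f"Estimated cells per field: {total_cells:.0f} total, {tumor_cells:.0f} PAX8+")
```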
3.1. Spontaneous Spheroids of EOC Patients' Ascites Show Both Epithelial and Partial Mesenchymal Phenotype We studied in depth ten samples of EOC ascites from six patients. Longitudinal samples of ascites were obtained from successive paracenteses of patients at relapse in a palliative care setting. Three of these six patients had not previously been treated with chemotherapy. Patients' characteristics are reported in . Manipulation of the samples was minimal to avoid selection of cell subpopulations. After collecting cells from whole ascites and brief washing with gentle centrifugations, the bulk cell population was immediately fixed in buffered formalin, embedded in Histogel™ and then in paraffin. shows representative H&E-stained sections of ascites samples.
As already known , spheroids in ascites vary in frequency, size and structure. Immunofluorescence (IF) was used to characterize the cell populations composing ascitic spheroids. Regardless of shape and size, spheroids turned out to be entirely constituted by PAX8-positive cells, i.e., cells with nuclear localization of this marker of HGS-EOC. Representative images are shown in . PAX8 is a paired-box gene important in the embryogenesis of the thyroid, Müllerian, and renal/upper urinary tracts; it is found increased in a number of carcinomas derived from these tissues and, notably, in approximately 100% of HGS-EOCs (see e.g., ). Conversely, both mesothelial cells and mesothelioma are PAX8 negative , so that PAX8 staining reliably distinguishes ovarian serous tumors from malignant mesotheliomas . More importantly, PAX8 is a key gene in HGS-EOC, as in experimental settings it drives the transformation of the fallopian tube epithelium, from which HGS-EOCs derive . We did not extensively use WT1, previously described as an HGS-EOC marker, in this study, as it is also expressed by mesothelial cells . Quantification of PAX8-positive cells relative to all cells stained with DAPI showed that in whole ascites the percentage of HGS-EOC cells varied from less than 2% to 70% in the sample with the highest cellularity ( ), in line with data reported by other authors (for a review see ). Lack of intra-patient variability in our series was shown not only by the analysis of several spheroids from a single ascites sample (see e.g., ), but also by the analysis of multiple longitudinal, independently obtained samples derived from the palliative care of two patients ( ). To further confirm that ovarian cancer cells are the main component of spheroids, EpCAM (Epithelial Cell Adhesion Molecule) was used as an additional marker. Although EpCAM is reported to be a membrane protein expressed by any epithelial cell type, its overexpression in carcinomas, and in particular by ovarian carcinoma cells, is used to distinguish the latter from mesothelial and non-epithelial cells in human ascites . As expected, EpCAM antibodies labelled the cell surface of PAX8+ cells in spheroids ( B,C) with a polarized distribution. Ki67 antibody was then used to assess whether spheroid cells were alive and whether EOC PAX8+ cells were proliferating. As shown in A, a number of PAX8+ cells in spheroids were stained with Ki67 antibody. Moreover, spheroids could be cultivated for up to 12 days. B shows the increased number of Ki67-positive cells after 5 and 8 days of cultivation. Surprisingly, staining of sections with an αSMA antibody, which is commonly used to identify fibroblasts (see control staining in ), marked a sub-membrane moiety of ovarian cancer cells, mainly at the inner surface of spheroids ( ), in most instances evidently overlaying the EpCAM staining ( A,B). The expression of αSMA was maintained not only in spheroids of the same patient derived from successive withdrawals ( B) but also after spheroid cultivation ( B). To rule out possible unspecific cross-reaction with βactin, another αSMA-specific antibody, similarly raised against the NH2-terminal sequence of the protein, was used and found to stain the same moieties in the cells ( ). Again, to rule out possible unspecific cross-reaction, a βactin antibody was used and found to stain different cells and different subcellular structures in spheroids (see ).
More importantly, RNAscope with a sequence-specific probe confirmed that the bona fide epithelial ovarian cancer cells, stained with PAX8 MAb, expressed the αSMA RNA encoded by the ACTA2 gene ( ). Only in the ascites of two patients, and in these two cases only in a few large spheroids, were bona fide fibroblasts likely detectable, i.e., spindle-shaped cells stained with αSMA antibody and with a central PAX8-negative nucleus ( C). Given the widespread staining with αSMA MAb but also with EpCAM MAb, and their polarized localization in spheroids, we hypothesize that ovarian cancer cells in spheroids undergo so-called "partial" epithelial-to-mesenchymal plasticity (for a review see ). This was confirmed by the detection of fibronectin (FN1) in the core of a number of spheroids ( ). It is known that soluble FN1 is detectable in ascitic fluid and also that endogenous FN1 is secreted by ovarian cancer cells . Thus, it was not surprising to detect FN1 staining outside and inside cells ( ). A thorough single-cell analysis of ascites had shown that macrophages are a fundamental component of the ascites ecosystem . We used anti-CD11b and anti-CD163 MAbs to evaluate the presence of tumor-associated macrophages in ascites spheroids. As shown by , in some sections rare CD163-positive cells were visible, located outside the spheroids, while CD11b-positive cells were detected very rarely, in only a few sections, and again outside the spheroids (not shown), suggesting that in the ascites M2-state macrophages were more numerous than M1-state macrophages and were mainly localized outside the spheroids. CD45-positive cells, i.e., cells of hematopoietic lineages, were also very rarely found in spheroids, but were abundantly detectable in the bulk single-cell population of ascites ( ). This was not surprising, as CD45-positive cells are reported to constitute more than 50% of cells in most ascites samples . 3.2. αSMA and Endogenous FN1 Are Necessary for the Formation of 3D Structures by Ovarian Cancer Cells To assess whether the mesenchymal-like phenotype is critical for the formation of 3D structures by ovarian cancer cells, we selected ovarian cancer cell lines with different expression levels of αSMA and FN1, based on the expression of the two specific mRNAs as reported in the Cancer Cell Line Encyclopedia database . Immunofluorescence showed that the proteins were differentially expressed, in agreement with the distribution of the mRNAs. Among the six ovarian cancer lines shown in A, the OVCAR4 and A2780 cells displayed the highest expression of αSMA and the lowest of FN1, while the COV362, OVCAR8 and OV90 cells showed the opposite pattern of protein expression, αSMA being barely detectable in the latter cell lines. When the above-listed cell lines were allowed to form organoid-like 3D structures in Matrigel, submembrane expression of αSMA was found in all cell lines ( B). Using Ingenuity Pathway Analysis, we ascertained that the proline-rich tyrosine kinase 2 (PTK2B/PYK2), which belongs to the FAK kinase family, regulates the expression of both αSMA and FN1. This is in accordance with previous reports showing the up-regulation of αSMA by this kinase (see e.g., ). Less well known is the role of FAK upstream of fibronectin expression, its downstream role being well established. We cultivated the organoid-like structures formed by the cell lines for 5 days in the presence of the highly specific FAK inhibitor PND-1186 (VS-4718) (FAKi, ref. ).
The inhibitor strongly affected the ability of the treated cells to aggregate into organoid-like structures: in the presence of FAKi, only small 3D structures were visible ( ), without impairment of cell proliferation ( ). Interestingly, the Feret diameter of patient-derived ascites spheroids cultivated for 5 days in the presence of FAKi was also reduced ( ).
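To illustrate how the size readout above can be derived from raw images, the sketch below uses scikit-image to segment aggregates in two hypothetical 4× images (control versus FAKi-treated), extract the maximum Feret diameter of each object, and compare the two size distributions. It is a schematic reconstruction under assumed file names and a simple Otsu threshold, not the NIS-Elements workflow used in the study.

```python
import numpy as np
from scipy.stats import mannwhitneyu
from skimage import filters, io, measure, morphology

def feret_diameters(path, min_area=200):
    """Segment aggregates in a 4x image and return each object's
    maximum Feret diameter in pixels."""
    img = io.imread(path, as_gray=True)
    mask = img > filters.threshold_otsu(img)             # simple global threshold
    mask = morphology.remove_small_objects(mask, min_area)
    labels = measure.label(mask)
    return np.array([p.feret_diameter_max for p in measure.regionprops(labels)])

# Hypothetical images of untreated and FAK-inhibitor-treated spheroids.
control = feret_diameters("spheroids_control_4x.tif")
treated = feret_diameters("spheroids_faki_4x.tif")

stat, p = mannwhitneyu(control, treated, alternative="two-sided")
print(f"median Feret diameter: control {np.median(control):.1f} px, "
      f"FAKi {np.median(treated):.1f} px (Mann-Whitney p = {p:.3g})")
```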
Advanced-stage EOC is characterized by early and massive metastatic dissemination in the peritoneum. Ascites, made of fluid and diverse cell types, is the vehicle of dissemination. Different experimental approaches have been used to identify the metastasis-prone cancer cells and to assess the contribution of non-malignant cells, as both replenish ascites as single cells and as aggregates. Malignant cells within aggregates, here named spheroids, are in particular considered the main source of EOC metastases, as experimental 3D structures have been shown to limit differentiation of EOC cells, i.e., to maintain the stem cell phenotype, and to protect cells from death due to loss of anchorage (anoikis) and from attack by chemotherapeutics (for review see ). However, the contribution of cancer cells to spheroids was poorly defined. Indeed, spheroids of patients' ascites are usually described as heterogeneous structures made of a small number of malignant cells and diverse non-malignant cell types (see e.g., [ , , , ]). In addition, the fraction of cancer cells in whole ascites varies across patients and is reported to range from less than 1% to approximately 8% of the total cell population. Finally, single-cell analyses showed the abundant presence in ascites of cancer-associated fibroblasts (CAFs) and tumor-associated macrophages (TAMs) . We show here that, by minimizing manipulation, we have been able to determine that spheroids from EOC ascites are almost exclusively made of PAX8+ cells, i.e., epithelial ovarian cancer cells. Only rarely did we find in spheroids spindle-shaped cells stained with αSMA antibodies, thus identifiable as bona fide CAFs, as reported by other authors (see e.g., ). Similarly, we did not detect CD45+ cells in spheroids, while we found several PAX8-/CD45+ cells outside spheroids in the bulk ascites cell preparations. We found a few TAMs, mainly with the M2 phenotype, i.e., cells stained with CD163 MAb, again outside the spheroids. TAMs have been shown to favor peritoneal metastases of ovarian cancer in mouse models . CAFs combined with EOC cells in artificial heterotypic aggregates form pro-metastatic units .
Altogether, data suggest that CAFs and TAMs rather form the ecosystem of ascites and might contribute to progression and metastases mainly via the secretion of cytokines, such as EGF, IL6 and TGFbeta (see e.g., ) and thus provide an environment supporting progression, chemoresistance and immune evasion . Additionally, we show here that almost all PAX8+ EOC cells in spheroids express αSMA and most spheroids contain FN1, which are markers usually associated to epithelial-mesenchymal transition (EMT). However, the same spheroid cells maintain the expression of epithelial markers, such as EpCAM, and cell polarization. Thus, all spheroid cells display a hybrid phenotype, also referred to as “partial” EMT or better as epithelial-mesenchymal plasticity (EMP) (for a review see ). Interestingly, the thorough scRNAseq analysis reported by Izar et al. showed that EPCAM+/CD24+ cells of the ascites of one out of seven patients analyzed express mesenchymal genes such as the ACTA2 gene encoding for αSMA. The EMP might be necessary for these spheroids to achieve their metastatic potential, as ovarian cancer cells implant into the peritoneum by breaching the mesothelial barrier and contacting the underlining ECM . In line, spheroids formed in vitro by colorectal cancer cells up-regulated expression of αSMA . Moreover, high expression of αSMA marks more malignant breast carcinoma . The hybrid partial mesenchymal phenotype described here in EOC spheroids is also in line with the notion that, in metastasizing carcinoma cells, a complete epithelial to mesenchymal transition is never observed, not only because the reversion of the EMT phenotype (mesenchymal to epithelial transition, MET) would be mandatory but never shown in metastatic site, but also because carcinoma cells always retain the epithelial phenotype (for a perspectives review see ). More importantly, while carcinoma cells in ascites, including those forming spheroids, do not show important genetic variation (see e.g., ), EMT itself is the best example of phenotypic plasticity that cancer cells acquire in several experimental conditions. Therefore, we believe that the morphological description of nonmanipulated spheroids shown here better reflect the actual phenotype of ovarian cancer cells in ascites. EMT has also been frequently associated to metastasis in ovarian cancer (for a review see ), but in this cancer, too, the clinical significance of EMT remains controversial, although even in ovarian cancer the association between the cancer stem cell phenotype and EMT has been shown . Data shown here, however, confirm the hypothesis that ovarian carcinoma cells are prone to plasticity. The striking expression of αSMA was unexpected, as only anecdotal reports have described before this expression in HGS-EOC . While we show here that α-SMA is necessary for 3D structure formation, we can only speculate on their possible physiological role in the context of ovarian cancer cell spheroids based on the necessity of 3D organization for survival, proliferation and metastasis. The experimental model, i.e., aggregation of diverse EOC cell line in 3D structure, confirmed that αSMA is neo-expressed and thus is likely crucial for the compaction and survival of the ovarian cancer cells as 3D structures. On the other hand, FN1 might also constitute an extracellular substrate for ovarian cancer cell aggregation in spheroids in the absence of stromal fibroblasts. 
It had been previously demonstrated that soluble ECM proteins, such as FN1 and vitronectin are detectable in ascitic fluid and are organized by integrins for survival and proliferation (for a review see ). The role of endogenous FN1 in metastasis has been previously demonstrated in experimental models of ovarian cancers. Using in vitro reconstituted models, Kenny et al. and Iwanicki et al. showed that FN1, either secreted by mesothelial cells or by EOC cells themselves, is necessary to allow 3D structures formed by ovarian cancer cells to survive in the absence of anchorage and in an unfitting metabolic environment. Moreover, silencing of its receptor, integrin α 5 , and antibodies blocking FN1 and α 5 or β 1 integrin function prevented the formation of experimental metastases . Data demonstrate that multicellular aggregates found in ovarian cancer ascites are made almost exclusively by cancer cells, thus reinforcing the assumption that they are responsible for peritoneal metastasis of ovarian cancer. Data also show that aggregates display a hybrid phenotype that allows maintenance of the polarized 3D structures, necessary for survival and proliferation of cancer cells, but also the plasticity necessary for seeding into peritoneal lining.
Metabolomics, Genetics, and Environmental Factors: Intersecting Paths in Abdominal Aortic Aneurysm
644c9487-8bcb-4f79-8a3a-53e64fd8784d
11855682
Biochemistry[mh]
Abdominal aortic aneurysm (AAA), characterized by an aortic diameter exceeding 3 cm, constitutes a substantial global healthcare challenge . The risk of AAA escalates notably after the age of 60. Clinically relevant aneurysms, exceeding 4 cm in diameter, are present in approximately 1% of men aged 55 to 64, with prevalence escalating by 2% to 4% per subsequent decade . AAAs occur four to six times more frequently in men than in women and are more prevalent in white individuals compared to Black individuals . AAA is characterized by localized structural deterioration of the aortic wall, resulting in progressive dilation and rupture . AAA pathogenesis is closely related to the progressive depletion and dysfunction of vascular smooth muscle cells and involves proteolysis, oxidative stress, inflammatory immune responses, and apoptosis . These processes cause the loss of elasticity and resistance of the arterial wall. Smoking and hypertension are significant risk factors for AAA. Additionally, AAAs are more prevalent in individuals with atherosclerosis, with an approximate 5% prevalence in those with coronary artery disease . A positive family history substantially increases the risk of AAA , suggesting that genetic factors play an important role in the development of AAA . AAA appears less common in individuals with diabetes . The core of AAA management is longitudinal surveillance until the aneurysm reaches a size at which the risk of rupture surpasses the risk of repair . Inflammatory processes and the gut microbiota have important roles in the pathogenesis of AAA. A recent study based on 449,463 participants from the UK Biobank reported that long-term exposure to air pollutants increased the risk of AAA . Similarly, exposure to environmental toxins such as polycyclic aromatic hydrocarbons (PAH) has been implicated in cardiovascular diseases and AAA . In recent years there has been a growing interest in exploring the metabolic and genetic profiles associated with AAA to understand its multifactorial nature. Previous metabolomics studies on AAA have been limited by small sample sizes and a small number of metabolites analyzed, leading to inconsistent results . Metabolites are crucial for cellular functions, influencing various physiological processes and signaling pathways. Given the several risk factors and metabolic pathways affected in patients with AAA, a metabolomics approach is likely to identify pathological processes underlying AAA expansion and to suggest novel therapeutic strategies. The identification of circulating biomarkers with diagnostic and prognostic value for AAA is challenging. A biomarker is a measurable indicator of a biological state or condition that is used to assess health or disease status or to evaluate responses to a therapeutic intervention. Based on the pathophysiology of AAA, circulating biomarkers can be classified according to their relationship with prothrombotic activity, degradation of the extracellular matrix of the vascular wall, or the immunoinflammatory response. To comprehend the impact of the genetic background and associated pathways in the development of AAA, we applied comprehensive high-throughput liquid chromatography–tandem mass spectrometry (LC-MS/MS) to patients with AAA compared with matched controls in the Finnish population-based METSIM study. 2.1. Baseline Characteristics of AAA Cases and Controls shows the baseline characteristics of the participants with AAA and randomly selected controls.
Participants with AAA were older, had higher systolic blood pressure, higher concentrations of triglycerides and hs-CRP, lower insulin sensitivity (Matsuda index), a higher percentage of smokers, and more frequent statin treatment. 2.2. Differences in the Metabolite Abundances Between the AAA Cases and Controls We found statistically significant differences in 25 metabolites between the participants with AAA and the controls. The participants with AAA had an increased abundance of xenobiotics (n = 10), a co-factor, beta-cryptoxanthin (n = 1), the carbohydrate N-acetylneuraminate (n = 1), 4 amino acids, and 9 lipids when compared to the controls . The most significant differences between the cases and the controls among the xenobiotics were for 2-naphthol sulfate ( p = 3.5 × 10 −9 ), methyl naphthyl sulfate ( p = 1.7 × 10 −7 ), and N-(2-furoyl)glycine ( p = 1.1 × 10 −6 ). Additionally, we found that the abundances of a xenobiotic (piperine), a co-factor (β-cryptoxanthin), a lactosylceramide (lactosyl-N-palmitoyl-sphingosine (d18:1/16:0)), and two lysoplasmalogens (1-linoleoyl-GPC (18:2) and 1-stearoyl-GPC (18:0)) were decreased in the cases compared to the controls. After adjustment for statin medication, all xenobiotics excluding O-cresol sulfate remained statistically significant, as did beta-cryptoxanthin, N-acetylneuraminate, and four amino acids. Among the nine lipids, only 2-hydroxyarachidate remained statistically significant. presents the metabolites nominally associated with AAA ( p < 0.05). We identified 301 metabolites, including 163 positively and 138 negatively associated with AAA, comprising lipids (n = 135), amino acids (n = 66), and xenobiotics (n = 44). Hippurate, bilirubin, biliverdin, phosphatidylcholine, lysophosphatidylcholine, aspartate, glutamate, and glutamine were nominally associated with AAA. Similar findings have been reported in previous small studies of AAA . 2.3. Correlation Between the Metabolites Significantly Associated with AAA After Adjustment illustrates the intercorrelations among the 16 metabolites. We observed positive correlations involving N-(2-furoyl)glycine and 3-methylcatechol sulfate, both of which are implicated in detoxification processes. Similarly, lanthionine and C-glycosyltryptophan demonstrated a positive correlation. Further, there were positive correlations among 3-hydroxy-3-methylglutarate, 2-naphthol sulfate, and methylnaphthyl sulfate. Additionally, 3-methylcatechol sulfate, N-(2-furoyl)glycine, and (2,4 or 2,5)-dimethylphenol sulfate also displayed positive associations. Positive relationships were also found between methylnaphthyl sulfate and C-glycosyltryptophan, as well as between 4-vinylphenol sulfate and 2-naphthol sulfate. Inverse correlations were identified between piperine and all 15 other metabolites analyzed. Beta-cryptoxanthin was inversely associated with the same metabolites as piperine. Additionally, inverse correlations were observed between 3-hydroxy-3-methylglutarate and both 2-naphthol sulfate and methylnaphthyl sulfate. Although positively correlated among themselves, these metabolites exhibited inverse relationships with the ketone body metabolite 3-hydroxy-3-methylglutarate, suggesting a complex interplay between energy production and detoxification mechanisms. Methylnaphthyl sulfate and lanthionine also showed inverse correlations.
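The pairwise relationships summarized above can be captured in a single rank-correlation matrix. The sketch below is purely illustrative: it assumes a hypothetical table of log-transformed abundances for the 16 adjusted metabolites (one row per participant, one column per metabolite) and uses Spearman correlation, which is not necessarily the exact method applied in the study.

```python
import numpy as np
import pandas as pd

# Hypothetical input: rows = participants, columns = the 16 metabolites that
# remained significant after adjustment (piperine, beta-cryptoxanthin,
# 2-naphthol sulfate, ...), values = log-transformed abundances.
abundances = pd.read_csv("aaa_metabolite_abundances.csv", index_col=0)

# Rank-based (Spearman) correlation is robust to the skewed distributions
# that are typical of metabolomics data.
corr = abundances.corr(method="spearman")

# List each metabolite pair once, sorted from strongest inverse to
# strongest positive correlation.
upper = np.triu(np.ones(corr.shape, dtype=bool), k=1)
pairs = corr.where(upper).stack().sort_values()
print(pairs.head(5))   # strongest inverse correlations (e.g., piperine vs. others)
print(pairs.tail(5))   # strongest positive correlations
```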
2.4. Association of Genetic Variants with Metabolites in Patients with AAA We utilized the GWAS database " https://www.ebi.ac.uk/gwas/ (accessed on 13 November 2024)" to identify the genetic variants and the genes significantly associated with metabolites in the participants with AAA. presents the associations at the genome-wide significance level ( p < 5 × 10 −8 ). For each gene, the genetic variant having the most significant association with the metabolites was selected. Six of the ten xenobiotics were significantly associated with genetic variants, and four were not associated with any genetic variant (methylnaphthyl sulfate (2), N-(2-furoyl)glycine, 4-vinylcatechol sulfate, and (2,4 or 2,5)-dimethylphenol sulfate). Two other metabolites (the lipid 2-hydroxyarachidate and the amino acid lanthionine) were not associated with any genetic variant. Additionally, three metabolites were associated with only one genetic variant (2-naphthol sulfate, piperine, and o-cresol sulfate). The variant rs169828 of the ARSL gene was significantly associated with 2-naphthol sulfate ( p = 2 × 10 −28 ). The ARSL gene encodes an arylsulfatase enzyme whose function is not known . O-cresol sulfate was associated with a variant of the SGF29 gene (rs480400, p = 8 × 10 −12 ). SGF29 specifically recognizes and binds methylated "Lys-4" of histone H3, as well as non-histone proteins that are methylated on Lys residues . 3-ethylcatechol sulfate was associated with a genetic variant of SLC51A (rs6795511, p = 1 × 10 −11 ). Piperine was associated with the rs8041357 variant of the ARID3B gene. Gene Ontology annotations related to this gene include RNA polymerase II cis-regulatory region sequence-specific DNA binding . Members of the ARID family have roles in embryonic patterning, cell lineage gene regulation, cell cycle control, transcriptional regulation, and possibly in chromatin structure modification . N-acetylneuraminate was associated with a genetic variant in ARHGEF3 (rs1354034, p = 4 × 10 −78 ), which regulates skeletal muscle regeneration. Other significant findings include an association of 1-(1-enyl-palmitoyl)-2-linoleoyl-GPC (P-16:0/18:2) with rs56228609-T of the HERPUD1 gene ( p = 1 × 10 −25 ). This gene may play a role in the unfolded protein response (UPR) and endoplasmic reticulum-associated protein degradation (ERAD). HERPUD1 expression is induced by the UPR, and the gene has an ER stress response element in its promoter region, while the encoded protein has an N-terminal ubiquitin-like domain, which may interact with the ERAD system. This protein has been shown to interact with presenilin proteins and to increase the level of amyloid-beta protein following its overexpression . β-cryptoxanthin was associated with the genetic variant rs75226183 in the RNUE-54P pseudogene. β-cryptoxanthin is known for its antioxidant properties .
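Variant-metabolite associations of this kind can also be cross-checked programmatically against the GWAS Catalog rather than through the web interface. The sketch below queries the Catalog's public REST API for one of the variants mentioned above; the endpoint path and response fields reflect the API as publicly documented and should be verified against the current documentation before use.

```python
import requests

BASE = "https://www.ebi.ac.uk/gwas/rest/api"

def associations_for_snp(rs_id):
    """Return (traits, p-value) pairs reported for a variant in the GWAS Catalog."""
    url = f"{BASE}/singleNucleotidePolymorphisms/{rs_id}/associations"
    resp = requests.get(url, params={"projection": "associationBySnp"}, timeout=30)
    resp.raise_for_status()
    results = []
    for assoc in resp.json().get("_embedded", {}).get("associations", []):
        traits = [t.get("trait") for t in assoc.get("efoTraits", [])]
        results.append((traits, assoc.get("pvalue")))
    return results

# Example: the ARHGEF3 variant associated with N-acetylneuraminate in the text.
for traits, pvalue in associations_for_snp("rs1354034"):
    print(traits, pvalue)
```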
Our study integrates metabolomics and genetics to explore potential biomarkers associated with AAA in the METSIM cohort and reports several novel findings. We found that among the 25 metabolites identified, 16 remained statistically significant after the adjustment for statin medication. Among them, nine xenobiotics were significantly associated with AAA. Xenobiotics are chemical substances foreign to the body, including plant-derived compounds, drugs, pesticides, food additives, industrial chemicals, and environmental pollutants . We observed that the abundance of xenobiotics was significantly increased in the participants with AAA compared to the controls, suggesting that an altered xenobiotic metabolism contributes to the pathophysiological processes resulting in aneurysm formation. Our findings indicate that exposure to environmental toxins, including polycyclic aromatic hydrocarbons (PAHs), pesticides, and herbicides, plays a major role in the development of AAA . Three metabolites positively associated with AAA in our study have been previously associated with tobacco smoking (2-naphthol sulfate, methylnaphthyl sulfate, and 4-vinylphenol sulfate) . Exposure to PAHs increases the risk of cardiovascular diseases, including AAA . PAHs are organic compounds found in tobacco and tobacco smoke, formed primarily during the incomplete combustion of organic materials .
PAHs induce endothelial dysfunction, oxidative stress, and inflammation, which weaken the aortic wall, resulting in vascular remodeling, arterial stiffness, and plaque formation . Chronic PAH exposure increases the risk of AAA, especially in individuals having several risk factors for AAA. We found that one metabolite, (2,4 or 2,5)-dimethylphenol sulfate, is a derivative of pesticides/herbicides. Pesticides and herbicides increase the risk of AAA by causing oxidative stress, inflammation, and arterial stiffness and by degrading structural proteins, such as elastin and collagen, which are important for the integrity of the aorta . Additionally, pesticides disrupt normal metabolic and immune processes that regulate vascular tissue remodeling, particularly in individuals with several risk factors for AAA . The significant metabolite differences we observed between the AAA cases and controls, particularly in the xenobiotics class, suggest a potential dysregulation in the body’s ability to process and eliminate these compounds in patients with AAA. Increased abundances of 2-naphthol sulfate, methylnaphthyl sulfate, and 4-vinylphenol sulfate indicate an altered xenobiotic metabolism, which could contribute to pathophysiological processes leading to aneurysm formation and progression. These findings support the hypothesis that exposure to environmental toxins increases the risk of AAA, as suggested by the studies linking cardiovascular diseases and environmental pollutants such as PAHs, pesticides, and herbicides to an increased risk of AAA . N-(2-furoyl)glycine is a metabolite generated by the microbiota and found in foods prepared at high heat. This metabolite belongs to the class of N-acyl-alpha amino acids, is a product of fatty acid catabolism, and regulates mitochondrial fatty acid beta-oxidation . N-(2-furoyl)glycine participates in pathways increasing oxidative stress, inflammation, and mitochondrial dysfunction, which are risk factors for cardiovascular diseases . We found a positive association between the carbohydrate conjugate N-acetyl-alpha-neuraminate, a sialic acid found on the surface of various cell types, and AAA. Sialic acids play a vital role in mediating cell–cell and cell–molecule interactions in eukaryotes, and they can be used by pathogens like E. coli to evade host immune responses . Infections of bacterial and fungal origin are known to contribute to the development of infectious AAA, which is associated with an elevated risk of aneurysm rupture . The abundance of piperine was decreased in the participants with AAA. Piperine has many pharmacological effects and several health benefits, especially against chronic diseases, such as increased insulin sensitivity, anti-inflammatory effects, and improvement of hepatic steatosis . Piperine has been shown to attenuate pathological cardiac fibrosis via PPAR-γ/AKT pathways . Inflammatory abdominal aortic aneurysm, observed in 3–10% of all cases of AAA, is characterized by extensive fibrosis, thickened walls, and dense adhesions. Surgery is technically challenging and is associated with increased morbidity and mortality . We found an increased abundance of piperine in the controls, suggesting a protective effect against AAA. We performed a correlation analysis among the 16 metabolites . We identified positive correlations among 3-hydroxy-3-methylglutarate, 2-naphthol sulfate, and methylnaphthyl sulfate, which play significant roles in energy metabolism and detoxification.
This suggests intricate interactions in metabolic pathways crucial for breaking down harmful compounds. Additionally, we observed important relationships within detoxification pathways involving 3-methylcatechol sulfate, N-(2-furoyl)glycine, and dimethylphenol sulfate, which are essential for managing oxidative stress and inflammation. We also noted inverse correlations between piperine, beta-cryptoxanthin, and metabolites related to amino acid metabolism and detoxification. These findings indicate that piperine and beta-cryptoxanthin might influence nutrient absorption and metabolism, pointing to potential impacts on metabolic health. We identified significant genetic associations with metabolites linked to AAA. Key findings include the association of the ARSL gene variant rs169828 with 2-naphthol sulfate. Additionally, variants in the SLC51A , ARID3B , and ARHGEF3 genes were associated with 3-ethylcatechol sulfate, piperine, and N-acetylneuraminate, respectively. Furthermore, the pseudogene RNUE-54P was linked to β-cryptoxanthin, known for its antioxidant properties . These findings not only enhance our understanding of the genetic factors influencing metabolite variations associated with AAA but also point to specific molecular pathways that may be targeted for therapeutic intervention, particularly the interaction between protective lipid molecules and protein response mechanisms. Previous studies have reported associations of several metabolites with AAA, including hippurate, biliverdin, bilirubin, proline, glycerol, aspartate, glutamate, glutamine, citric acid, 2-oxoglutaric acid, succinic acid, phosphatidylcholines, and lysophosphatidylcholines . However, these findings have not been consistent across studies. In our study, using the nominally significant threshold ( p < 0.05), we were able to replicate most of these findings, including the associations with hippurate, biliverdin, bilirubin, phosphatidylcholine, lysophosphatidylcholines, aspartate, glutamate, and glutamine. This suggests that increasing the sample size is essential to identify new metabolites significantly associated with AAA. summarizes our findings and underscores the critical links between xenobiotic exposure and the development of AAA, influenced by lifestyle, environmental toxins, and genetic factors. We identified key metabolites and genetic variants contributing to the pathophysiology of AAA. In summary, our study provides a comprehensive analysis of the metabolomic and genetic risk factors associated with AAA in the METSIM cohort, offering new insights into the pathophysiology of AAA. We identified eight xenobiotics positively associated with AAA, highlighting the role of environmental toxins such as PAHs, pesticides, and herbicides in increasing vascular remodeling, oxidative stress, and inflammation . Genetic analyses revealed associations between the key metabolites and specific genetic variants, emphasizing an independent effect of genetic predisposition on the risk of AAA. The apparent protective role of certain metabolites, especially piperine and beta-cryptoxanthin, suggests potential therapeutic avenues. The limitations of our study are that it included only middle-aged and elderly Finnish men, which limits the generalizability of our findings to women and other populations. Additionally, the use of cross-sectional data restricts our ability to establish causality between metabolic factors and AAA.
Future studies are needed to investigate metabolomics and genetics in larger, more diverse populations to validate our findings on the pathophysiology and etiology of AAA. Longitudinal studies tracking metabolic changes over time in individuals at high risk for AAA may help to identify early biomarkers, providing an opportunity for preventive interventions. Given the strong associations between xenobiotics and AAA, studies into environmental and lifestyle modifications could offer practical applications to reduce the incidence of AAA. Further exploration of the metabolic pathways involved in xenobiotic metabolism may lead to the development of targeted therapies to mitigate the harmful effects of environmental toxins. The identification of potential biomarkers also opens avenues for precision medicine approaches, where metabolic profiling could guide personalized monitoring and treatment strategies for individuals at high risk of AAA. 4.1. Study Population The METSIM study includes 10,197 men, aged 45 to 73 years at baseline, randomly selected from the population register of Kuopio, Eastern Finland. The METSIM study was approved by the Ethics Committee of the Kuopio University Hospital, Finland. All participants provided written informed consent. The design and methods of the METSIM study have been previously described in detail . A total of 304 men from the METSIM study were included in the current study, 76 participants having AAA and 228 random controls. 4.2. Clinical and Laboratory Measurements Height was measured without shoes to the nearest 0.5 cm. Weight was measured in light clothing with a calibrated digital scale (Seca 877, Hamburg, Germany). Laboratory studies after a 12 h fast included the following measurements: plasma glucose and insulin, lipids, lipoproteins, and mass spectrometry metabolomics (Metabolon, Durham, NC, USA). An oral glucose tolerance test (75 g of glucose) was performed to evaluate glucose tolerance. Clinical and laboratory measurement methods have been previously published . Briefly, plasma glucose was measured by an enzymatic hexokinase photometric assay (Konelab Systems Reagents, Thermo Fisher Scientific, Vantaa, Finland). Insulin was determined by immunoassay (ADVIA Centaur Insulin IRI, no. 02230141, Siemens Medical Solutions Diagnostics, Tarrytown, NY, USA). Serum alanine aminotransferase (ALT) was measured by an enzymatic photometric test (Konelab Reagent System, Thermo Fisher Scientific, Vantaa, Finland). BMI was calculated as weight divided by height squared. Smoking status was defined as current smoking. Other laboratory measurements have been previously reported . Estimated glomerular filtration rate (eGFR) was calculated using the CKD-EPI equation . 4.3. Metabolomics Metabolites were measured using Metabolon Inc.’s untargeted Discovery HD4 platform based on ultra-high-performance liquid chromatography–tandem mass spectrometry (UPLC-MS/MS) (Metabolon, Morrisville, NC, USA). Samples stored at −80 °C prior to analysis were prepared using the automated MicroLab STAR ® system (Hamilton Company, Reno, NV, USA). Several recovery standards were added prior to the first step in the extraction process for quality control (QC) purposes. A pooled matrix sample, generated by taking a small volume of each experimental sample, served as a technical replicate throughout the dataset.
Extracted water samples served as process blanks, and QC standards that were carefully chosen not to interfere with the measurement of endogenous compounds were spiked into every analyzed sample, allowing instrument performance monitoring and aiding chromatographic alignment. Overall process variability was determined by calculating the median relative standard deviation for all endogenous metabolites present in 100% of the pooled matrix samples. A data normalization step was performed to correct variation resulting from instrument inter-day tuning differences in studies spanning multiple days. Experimental samples were randomized across the platform run with QC samples spaced evenly. Raw data were extracted, peak-identified, and QC processed using the Metabolon DiscoveryHD4 platform, and peaks were quantified using the area under the curve. Compounds were identified by comparison to library entries of purified standards or recurrent unknown entities. Library matches for each compound were checked for each sample and corrected if necessary. Each metabolite was rescaled to set the median equal to 1. The Metabolon DiscoveryHD4 platform identified a total of 1540 metabolites. From this initial set, only metabolites with at least 40% complete data across the dataset were retained, while all metabolites lacking identification information were excluded, resulting in 1009 metabolites for statistical analysis. All samples were processed together for peak quantification and data scaling. Overall process variability was also evaluated as the median relative standard deviation of the quantified raw mass spectrometry peaks (area under the curve) for endogenous metabolites present in all 20 technical replicates in each batch. Variation was adjusted for day-to-day instrument tuning differences and the columns used for biochemical extraction by scaling the raw peak quantifications to the median for each metabolite within each Metabolon batch. 4.4. Selection of Genetic Variants Associated with AAA We identified genetic variants associated with AAA from previous publications and the GWAS Catalog (The NHGRI-EBI Catalog of human genome-wide association studies) “ https://www.ebi.ac.uk/gwas/ (accessed on 13 November 2024)” in individuals of European ancestry. Among the genetic variants for each gene, we selected the variant having the most significant association with AAA. 4.5. Statistical Analysis We conducted statistical analyses using IBM SPSS Statistics, version 29. We log-transformed all continuous variables except for age to correct for their skewed distribution. We used one-way ANOVA to assess the differences in clinical traits and metabolites between the two groups, participants with AAA and controls. We used the Bonferroni correction to define the statistical significance threshold. The threshold for statistical significance was ≤5.0 × 10 −5 , given the 1009 metabolites included in the statistical analysis. Correlations between the metabolites were calculated using the Pearson correlation. The correlation figure was generated using the Python programming language.
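To make the workflow in Sections 4.3–4.5 concrete, the sketch below walks through metabolite filtering, median scaling, log transformation, the Bonferroni threshold (0.05/1009 ≈ 5.0 × 10 −5 ), a per-metabolite one-way ANOVA, and Pearson correlations. This is an illustrative sketch only, not the analysis code used in the study (which relied on Metabolon’s processing pipeline and IBM SPSS); the file name, column names, and data layout are hypothetical assumptions.

```python
# Minimal sketch of the preprocessing and testing steps described above.
# "metabolites.csv" and its columns (e.g., "AAA_status") are hypothetical.
import numpy as np
import pandas as pd
from scipy import stats

data = pd.read_csv("metabolites.csv")           # one row per participant
groups = data["AAA_status"]                      # 1 = AAA case, 0 = control
metabolites = data.drop(columns=["AAA_status"])

# Keep metabolites with at least 40% non-missing values.
keep = metabolites.notna().mean() >= 0.40
metabolites = metabolites.loc[:, keep]

# Median-scale each metabolite to 1, then log-transform (skewed distributions).
# Assumes strictly positive abundances, as in a typical metabolomics table.
scaled = metabolites / metabolites.median()
logged = np.log(scaled)

# Bonferroni-corrected significance threshold for the retained metabolites.
threshold = 0.05 / logged.shape[1]

# One-way ANOVA comparing cases and controls for every metabolite.
p_values = {}
for name in logged.columns:
    cases = logged.loc[groups == 1, name].dropna()
    controls = logged.loc[groups == 0, name].dropna()
    _, p = stats.f_oneway(cases, controls)
    p_values[name] = p
significant = [m for m, p in p_values.items() if p <= threshold]

# Pearson correlations among the significant metabolites.
correlation_matrix = logged[significant].corr(method="pearson")
print(f"threshold = {threshold:.2e}, significant metabolites = {len(significant)}")
```

With 1009 retained metabolites the computed threshold matches the ≤5.0 × 10 −5 cut-off stated above, since 0.05/1009 ≈ 4.96 × 10 −5 .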
We applied metabolomics and genetics to identify novel metabolites associated with AAA in the METSIM cohort. Among the 16 metabolites significantly associated with AAA, 10 were xenobiotics linked to lifestyle and environmental exposures, including substances from benzoate metabolism, smoking, and pesticides. Genetic analyses highlighted several genetic variants associated with metabolite abundances positively associated with AAA. Our comprehensive analysis, combining metabolomics with genetic data, robustly demonstrates that lifestyle and environmental influences and genetic factors play significant roles in the etiology of AAA.
Connecting patients with clinical trials using patient navigation: a scoping review protocol
0995b82d-d78e-4483-8790-b882db83783e
11956392
Patient-Centered Care[mh]
Interventions are needed to increase participation in clinical trials through optimised trial design and enrolment workflows. Patient navigation is a promising intervention for increasing participation in clinical trials by optimising enrolment workflows. Patient navigators were introduced in healthcare in the early 1990s in the USA to address inequitable access to care for patients with cancer. Currently, the definition, structure and purpose of patient navigation programmes vary considerably in terms of patient population, disease focus, programme design and implementation. In 2023, Chan et al published the largest overview of systematic reviews of patient navigation across the cancer continuum. They suggested the following definition of patient navigation in cancer care: “an individualized intervention that aims to address barriers and facilitate timely access to healthcare services, diagnosis, treatments and care”. Among the 61 systematic reviews and 53 published studies included by Chan et al , patient navigation was linked to an increase in access to healthcare. Since Chan et al ’s comprehensive review of the evidence on patient navigation in cancer care, there remains a gap in understanding the purpose, design, implementation and impact of navigation programmes aiming to increase participation in clinical trials. Thus, while patient navigation is a promising strategy to improve access to cancer care, the evidence supporting its role in increasing access to clinical trials has not been systematically evaluated. This scoping review aims to critically appraise, synthesise and present the available evidence on using patient navigation to increase clinical trial enrolment. This will help identify gaps in the evidence by summarising what is known about patient navigation aimed at connecting patients with clinical trials, thereby highlighting areas for future research. A preliminary search of MEDLINE, the Cochrane Database of Systematic Reviews and JBI Evidence Synthesis was conducted, and no existing or ongoing systematic reviews or scoping reviews on the topic were identified. Our scoping review protocol was registered on Open Science Framework ( https://doi.org/10.17605/OSF.IO/BTMYK ). This review was guided by the JBI methodology for scoping reviews using a five-step review process: identify the research questions; search and identify relevant studies; select studies based on a priori criteria; chart the data; and collate, summarise and report the results according to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) extension for scoping reviews. The primary research question for this scoping review was the following: ‘What is the existing evidence on patient navigation interventions aimed at increasing clinical trial enrolment, and how are these interventions characterised in terms of their design, implementation, and outcomes?’ The secondary research questions regarding the use of patient navigation to connect patients to clinical trials included the following: What qualifications (eg, certification, clinical background) are used for patient navigation in clinical trials? What training exists for patient navigation in clinical trials? How do patient navigation interventions address and impact enrolment in clinical trials for equity-deserving or under-represented groups (eg, racial/ethnic minorities, lower socio-economic status populations or geographically isolated communities)?
Participants The target population included any individual who may engage with navigation services within the context of healthcare, including, but not limited to, patients, their caregivers and healthcare professionals (eg, physicians, physician assistants and nursing staff). In addition, there was no geographical limitation. Concept The central concept being evaluated in this scoping review is patient navigation in clinical trials. As previously mentioned, the current study adopted the Chan et al definition of patient navigation, which is “an individualized intervention that aims to address barriers and facilitate timely access to healthcare services, diagnosis, and treatments and care”. Patient navigation has been used interchangeably with certain concepts in previous literature, and for the purpose of this review, clarification is warranted. Although patient navigation and care coordination may be viewed as overlapping terms, according to Chan et al , care coordination is a key domain of patient navigation rather than being synonymous with patient navigation. Care coordination primarily focuses on logistical aspects of healthcare, facilitating connections between providers and settings. Meanwhile, patient navigation is protocol-driven, ensuring that each patient follows an appropriate clinical pathway tailored to their needs. Similarly, case management and patient navigation are sometimes used interchangeably, but they serve different functions in healthcare. Case management, according to the Case Management Society of America, is “a collaborative process of assessment, planning, facilitation, care coordination, evaluation and advocacy for options and services to meet an individual’s and family’s comprehensive health needs through communication and available resources to promote patient safety, quality of care, and cost effective outcomes”. In contrast, patient navigation refers to the partnership between the patient and the navigator, who is responsible for guiding and supporting a patient through the healthcare process while promoting self-management through education and emotional support. Therefore, the definition of case management is more system-focused, while patient navigation is seen as more patient-centred. Context Studies that evaluated the use of patient navigation to increase participation in clinical trials across all healthcare settings were included. Types of sources The scoping review included experimental and quasi-experimental study designs, analytical observational studies, descriptive observational study designs and qualitative studies. Systematic reviews and opinion papers were excluded. Systematic reviews were not included because the focus was on the details of the individual interventions, and no existing systematic reviews were identified that addressed this specific question. However, if systematic reviews with a similar scope were found, their reference lists were reviewed to identify relevant studies for the scoping review. Search strategy Studies published from database inception up to the search date were included; the search was performed on 21 December 2023 and updated on 5 March 2025 (see respectively). The search strategy aimed to locate both published and unpublished studies. Studies published in the English language were included to ensure a consistent and rigorous analysis of the nuanced concept of patient navigation since the review team consists of native English speakers familiar with the subtle linguistic and contextual elements of patient navigation.
Additionally, due to resource constraints, systematically translating or reviewing non-English articles was not feasible. An initial limited search of MEDLINE and CINAHL was undertaken to identify articles on the topic. The text words contained in the titles and abstracts of relevant articles and the index terms used to describe the articles were used to develop a full search strategy for the Cochrane CENTRAL (Ovid), MEDLINE (Ovid), EMBASE (Ovid), Cumulative Index to Nursing and Allied Health Literature (CINAHL; on EBSCOhost; EBSCO Industries, Inc.), Epistemonikos and PROSPERO databases. Searches were also conducted through the Turning Research into Practice and International Clinical Trials Registry Platform (WHO) databases, Google Scholar and the Agency for Health Research and Quality. The search strategy focused on the following key concepts: navigation (eg, navigator, care coordination, case management) and clinical trials (see ). The search strategy, including all identified keywords and index terms, was adapted for each included database and/or information source. The reference list of all included sources of evidence will be screened for additional studies. Sources of unpublished studies and grey literature will be searched separately from the search strategy above, including conference abstracts, trial registries, thesis repositories, institutional databases, preprint servers, government reports and professional organisation websites. Searches were reviewed using the PRESS Peer Review of Electronic Search Strategies 2015. Study/source of evidence selection We will use DistillerSR software (Evidence Partners Inc., Ottawa, Canada) to conduct the screening in this review. Identified citations will be collated and uploaded into DistillerSR where duplicates will be removed. All citations will be screened by two independent reviewers. We will conduct a pilot test, where each researcher will first code 10% of the citations, selected at random. Cohen’s κ coefficient will be used to calculate inter-rater reliability, a statistical approach to assess how well independent coders can agree with each other without any discussion. Cohen’s κ>0.80 is indicative of satisfactory reliability. Following the pilot test, titles and abstracts will then be screened by two independent reviewers for assessment against the inclusion criteria for the review. Potentially relevant sources will be retrieved in full and their citation details imported into the JBI System for the Unified Management, Assessment and Review of Information (JBI, Adelaide, Australia). The full text of selected citations will be assessed in detail against the inclusion criteria by two independent reviewers. Reasons for exclusion of sources of evidence at full text will be recorded and reported in the scoping review. Any disagreements that arise between the reviewers at each stage of the selection process will be resolved through discussion or with an additional reviewer/s. The results of the search and the study inclusion process will be reported in full in the final scoping review and presented in a PRISMA-ScR flow diagram. Data extraction Data will be extracted from papers included in the scoping review by two independent reviewers using a data extraction tool developed by the reviewers.
Data will be extracted using tables 1–4 in , which collect data regarding study design and information based on the frameworks RE-AIM (Reach, Effectiveness, Adoption, Implementation, Maintenance), PRISM (Practical Robust Implementation and Sustainability Model) and TIDieR (Template for Intervention Description and Replication) to include key findings relevant to the primary and secondary research questions. By using the RE-AIM, PRISM and TIDieR frameworks, information will be collected on the programme’s impact, its context and fit during implementation and in real-world settings, and sufficient details about the intervention, respectively. The following data will be collected: existing patient navigation interventions along with their design and implementation process, outcomes, required qualifications or training for navigators and information regarding the focus on equity-deserving or underrepresented groups (if applicable). The patient navigation interventions will be classified based on Chan et al ’s domains of patient navigation (see ). These domains represent the most up-to-date and comprehensive review of patient navigation interventions. They include the following domains: care coordination, education/information provision, empowerment, comfort/emotional support, direct care provision, advocacy, language assistance, logistics assistance and financial assistance. Outcomes of patient navigation will be described based on Core Outcome Measures in Effectiveness Trials (COMET) (see ). COMET is a framework that establishes standardised outcome sets, known as ‘core outcome sets’, which define the minimum outcomes that should be measured and reported for studies evaluating the effectiveness of interventions. Previous work by Tsou et al suggests that artificial intelligence (AI) tools can be beneficial for reviews with ≥2500 citations. We will use DistillerSR’s AI tools to assist with screening. We will cross-check screening results using DistillerSR’s Check for Screening Errors tool. The tool helps users reassess exclusions to catch any potential mistakes. The tool learns patterns from the references the user has already reviewed through 10-fold cross-validation. Instead of using all references at once, it repeatedly trains on different subsets of the data by dividing the reviewed references into 10 subsets. The tool then trains on nine of these folds and tests on the remaining one, repeating the process 10 times, each time using a different subset for testing. This ensures a more robust and generalisable model, reducing the chance of bias. We will use the Screening Prioritization tool, which uses machine-learning algorithms trained on a subset of citations to order citations for screening and present titles and abstracts with the highest probability of meeting the inclusion criteria first. By using the Screening Prioritization tool, we will be able to use a ‘stop screening approach’ in the first-level title and abstract screen. A ‘stop screening approach’ is where the review team stops screening when citations fall beneath a specified probability for inclusion. This is informed by the ‘estimated recall rate’, which can be calculated using DistillerSR’s AI tools after a certain number of references have been screened and conflicts resolved (usually at least 2% of the total number of references). In the current study, an estimated recall rate of 95% will be used.
This means that the review team has likely identified 95% of all relevant references in the data set at the point where they stopped screening. In other words, the AI tool predicts that only a small percentage (5%) of relevant references remain undiscovered among the citations that were not screened. This allows the team to stop screening early while still being confident that they have captured the vast majority of relevant studies. A recent study by Hamel et al found that when they aimed for a 95% recall rate, they nearly eliminated missed studies. They suggested this happened because the remaining 5% of records were erroneously included in full-text screening because reviewers tended to err on the side of including a study rather than excluding it. This leads to over-inclusiveness while screening; therefore, using an estimated recall rate of 95% can ensure efficiency without significantly missing key studies. Data analysis and presentation The data will be presented in graphic or tabular form. A narrative summary describing how the results relate to the review’s objective and question/s will accompany the graphic and/or tabulated results. A qualitative content analysis was chosen to answer the research questions of this scoping review. Qualitative content analysis is a descriptive method that involves open coding to classify concepts or characteristics into overall categories. Specifically, we will use a deductive content analysis, which is when the analysis is guided by an existing theory, framework or literature to construct an initial coding scheme for data analysis. This approach was deemed a suitable analysis method for this scoping review because existing literature was available for an initial coding scheme. During extraction, patient navigation interventions will be sorted according to Chan et al ’s domain classification for patient navigation care and components for the interventions. Primary outcomes extracted from articles will be classified based on the COMET. A codebook will be established based on Chan et al ’s domain classification and COMET. A draft codebook is provided (see ). The selected articles will be imported into NVivo V.11, a qualitative data analysis software. Two coders will be trained to use NVivo V.11 and familiarised with the codebook. Everything will be coded in duplicate, and another researcher on the team will resolve any conflicts. Once the coders reach a satisfactory level of agreement, they will code the rest of the selected articles independently. Additionally, two independent researchers will extract data using single-word responses to conduct frequency counts of the data based on the following frameworks: RE-AIM, PRISM and TIDieR frameworks (tables 2–4 in ). Similar to the deductive content analysis, everything will be extracted in duplicate, and another researcher on the team will resolve any conflicts. Cohen’s κ coefficient will be used to assess inter-rater reliability for data extraction, similarly to its use with screening of records. Data will be presented in various formats, including tables and a narrative description to summarise the available evidence and results. A heat map will illustrate the number of included studies conducted in each country worldwide, while a waffle chart will highlight the types and percentages of study designs used. Expected timeline After the search was conducted on 21 December 2023, first-level screening (title and abstract) began on 24 December 2023 and ended on 24 August 2024. 
Second-level screening (full-text) started shortly after and was completed by 2 January 2025. Data extraction began on 5 October 2024 and is currently ongoing. Data analysis is expected to begin in mid-March 2025 and be completed by mid-April 2025. Manuscript writing will then begin in mid-April 2025. A manuscript will be submitted by August 2025. The search strategy and results were updated on 5 March 2025. Therefore, reviewers will complete first-level screening and second-level screening of the new articles retrieved by the end of March 2025. Patient and public involvement This scoping review will include patients and caregivers from the research team when interpreting the results, publishing the review and disseminating the results. The GRIPP2-SF checklist will be used to report patient and public involvement in our scoping review. To interpret the results, a meeting will be held with patients and caregivers, where initial results will be presented. Through discussion, the group will collaboratively refine the results and discussion. In terms of disseminating the results and publishing the review, a meeting will be held once again with patients and caregivers to discuss a plan regarding how to share results. Patients and caregivers will be involved in creating dissemination material through public media. Additionally, group members will discuss their roles, such as contributing to the patient perspective discussions at conferences.
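As noted in the screening and data extraction plans above, Cohen’s κ will be used to quantify agreement between the two independent reviewers, with κ>0.80 taken as satisfactory. The snippet below is a minimal illustration of that calculation; the decision vectors are hypothetical, and the scikit-learn call is an assumption for illustration only, since the protocol itself specifies DistillerSR and NVivo rather than any particular statistics package.

```python
# Illustrative calculation of Cohen's kappa for two independent reviewers.
# The decision lists are hypothetical; 1 = include, 0 = exclude.
from sklearn.metrics import cohen_kappa_score

reviewer_a = [1, 0, 0, 1, 1, 0, 1, 0, 0, 1]
reviewer_b = [1, 0, 1, 1, 1, 0, 1, 0, 0, 1]

kappa = cohen_kappa_score(reviewer_a, reviewer_b)
print(f"Cohen's kappa = {kappa:.2f}")

# A kappa above 0.80 would indicate satisfactory inter-rater reliability.
if kappa > 0.80:
    print("Agreement satisfactory: proceed to independent screening/coding.")
else:
    print("Agreement below 0.80: discuss discrepancies and refine criteria.")
```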
This scoping review is secondary research that identifies and analyses existing and publicly available research. Therefore, ethics approval was not required. Throughout the lifecycle of this scoping review, experts in clinical trials and patient navigation will be consulted for insights and feedback as part of integrated knowledge translation activities. Traditional end-of-grant knowledge translation activities will be undertaken, including peer-reviewed publications and scientific presentations. This scoping review will inform the development of future research proposals. 10.1136/bmjopen-2024-088828 online supplemental file 1 10.1136/bmjopen-2024-088828 online supplemental file 2 10.1136/bmjopen-2024-088828 online supplemental file 3 10.1136/bmjopen-2024-088828 online supplemental file 4
The Effect and Treatment of PIK3CA Mutations in Breast Cancer: Current Understanding and Future Directions
9bb134ab-96b7-4dc1-8e80-e18b15bf9301
11944057
Neoplasms[mh]
Breast cancer represents one of the most prevalent causes of cancer-related mortality among females worldwide . A significant proportion of breast cancer patients exhibit genetic mutations. According to cBioPortal (invasive breast carcinoma, TCGA, GDC), a public platform for large-scale cancer genomics data sets, breast cancer demonstrates a notable prevalence of gene mutations, particularly in PIK3CA (34%), which is one of the most frequently mutated genes in breast cancer. PIK3CA encodes the catalytic subunit of PI3Ks, the phosphoinositide 3-kinases that function as signal transducers in various signaling pathways. The PI3K family is categorized into three classes (I, II, and III) based on their coding genes, distinct structures, and substrate preferences. Class I PI3Ks comprise the catalytic subunit p110 and the regulatory subunit p85. The p110 subunit consists of p110α, p110β, and p110δ, encoded by PIK3CA, PIK3CB, and PIK3CD, respectively. The p85 subunit comprises p85α, p85β, and p85γ, which are encoded by PIK3R1, PIK3R2, and PIK3R3, respectively . PIK3CA is located on chromosome 3q26.3 and spans approximately 34 kb. PIK3CA comprises 20 exons that encode a protein of 1068 amino acids with a molecular mass of 124 kDa . Exon 9 encodes the helical domain of the PIK3CA catalytic subunit. E542K and E545K, which are hotspot mutations in exon 9, result in the overactivation of the PIK3CA downstream pathway through the alteration of domains and the disruption of the inhibitory interaction between p110α and p85α . Exon 20 encodes the kinase domain. H1047R, the hotspot mutation in exon 20, reduces the rate of apoptosis in mammary tumors . PIK3CA mutations are usually associated with a poor prognosis in breast cancer. A study of 217 Asian female breast cancer patients demonstrated that PIK3CA mutations were associated with poor prognosis, with relapse-free survival (RFS) significantly shorter in PIK3CA-mutated patients than in PIK3CA wild-type patients . In human epidermal growth factor receptor 2-positive (HER2+) breast cancer patients undergoing HER2-targeted treatment, patients with PIK3CA mutations showed lower progression-free survival (PFS) than those with wild-type PIK3CA (hazard ratio (HR) 4.602, 95% CI 2.057–10.514, p < 0.001) . In hormone receptor-positive/human epidermal growth factor receptor 2-negative (HR+/HER2−) metastatic breast cancer patients, PIK3CA mutations are associated with low overall survival (OS) and PFS . Another study reported that PIK3CA-mutated HR+/HER2− metastatic breast cancer demonstrated inferior OS compared to wild-type PIK3CA (HR 1.44, 95% CI 1.02–2.03) . Among HER2− breast cancer patients, those with wild-type PIK3CA had better OS (HR 0.161, p = 0.010) and disease-free survival (DFS) (HR 0.376, p = 0.069) than those with PIK3CA mutations. Estrogen receptor-positive (ER+) patients with wild-type PIK3CA had better OS (HR 0.203, p = 0.058) compared to those with PIK3CA mutations . These findings suggest that PIK3CA mutations contribute to cell proliferation, apoptosis suppression, and tumorigenesis through the activation of the PI3K/AKT/mTOR pathway. This review aimed to investigate publications and explore therapeutic approaches for PIK3CA-mutant breast cancer. First, we analyzed PIK3CA mutations in biological mechanisms using public databases and several publications. Second, we systematically reviewed clinical trials targeting PIK3CA mutations in breast cancer.
Finally, this study examined noncoding RNAs as potential therapeutic targets for PIK3CA-mutant breast cancer. 2.1. Public Platform STRINGdb, https://string-db.org (accessed on 23 December 2024), is a widely used online portal for protein–protein interaction network and functional enrichment analysis. We analyzed the proteins that showed significant interactions with the PIK3CA protein using STRINGdb. Gene Ontology (GO) analysis is a commonly used technique for large-scale functional enrichment research. The proteins listed by STRINGdb were characterized according to their functional roles, namely biological process (BP), cellular component (CC), and molecular function (MF), using the Database for Annotation, Visualization, and Integrated Discovery (DAVID) tools ( https://david.ncifcrf.gov/home.jsp ) (accessed on 23 December 2024). 2.2. Literature Research PubMed ( https://pubmed.ncbi.nlm.nih.gov , first accessed on 10 December 2024) was used to search for English-language articles related to PIK3CA mutations in breast cancer. Articles published from 1 January 2014 to 31 October 2024 were retrieved using the keywords PIK3CA and breast cancer. 2.3. Search for Clinical Trial ClinicalTrials.gov ( https://clinicaltrials.gov , accessed on 10 December 2024) is a widely used registry of clinical trials from around the world. Clinical trials were searched using the following options: breast cancer (condition/disease) and PI3K gene mutation (other terms). Of the above clinical trials, we selected those whose study status was listed as completed and no longer recruiting participants. 2.4. Statistical Analysis Gene Ontology data were downloaded from the Functional Annotation Chart records in the DAVID tools. Data were analyzed using R statistical software version 4.3.0 (R Foundation for Statistical Computing, Vienna, Austria).
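For readers unfamiliar with how enrichment tools such as DAVID assess over-representation of a GO term, the following Python sketch illustrates the underlying hypergeometric test. The gene counts are hypothetical and the snippet is an illustration of the statistical idea, not part of the DAVID workflow itself.

```python
from scipy.stats import hypergeom

# Hypothetical numbers: 20,000 annotated background genes, 300 of which carry a
# given GO term; the query list has 25 genes, 6 of which carry the term.
background_genes = 20_000
term_genes = 300
query_genes = 25
overlap = 6

# P(X >= overlap) under the hypergeometric null, the basis of most GO enrichment tests
p_value = hypergeom.sf(overlap - 1, background_genes, term_genes, query_genes)
print(f"Enrichment p-value: {p_value:.2e}")
```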
The Effect of PIK3CA Mutations on the PI3K/AKT/mTOR Pathway The association between PIK3CA and the PI3K/AKT/mTOR pathway was analyzed using a public database. STRINGdb ( ) illustrated the direct interaction and potential functional relation among PIK3CA and other proteins: Phosphatidylinositol 3-kinase regulatory subunit beta (PIK3R2), Phosphatidylinositol 3-kinase regulatory subunit gamma (PIK3R3), epidermal growth factor receptor (EGFR), insulin receptor substrate 1 (IRS1), phosphoinositide 3-kinase regulatory subunit 5 (PIK3R5), Phosphatidylinositol 3-kinase regulatory subunit alpha (PIK3RA), GTPase KRas (KRAS), Phosphatidylinositol 4,5-bisphosphate 3-kinase catalytic subunit delta isoform (PIK3CD), RAC-alpha serine/threonine-protein kinase (AKT1), and Phosphatidylinositol 4,5-bisphosphate 3-kinase catalytic subunit beta isoform (PIK3CB). These proteins were categorized into three functional groups in the gene ontology analysis ( , ): biological process (BP), cellular component (CC), and molecular function (MF). GO analysis showed that BP had the highest regulation of signaling pathways. CC is mainly associated with lamellipodium, cell to cell junction, cytoplasm, and nucleus. MF is primarily associated with protein binding and kinase activity. PI3K signaling plays pivotal roles in various cellular processes, including cell proliferation, transport within the cells, and survival. The PI3K/AKT/mTOR pathway ( ) is initiated by the activation of PI3K through receptor tyrosine kinases (RTKs) [ , , ]. Activated RTKs bind to the regulatory subunit p85 of PI3K, resulting in the activation of the catalytic subunit p110 of PI3K . Upon the activation of PI3K, phosphatidylinositol 4,5-bisphosphate (PIP2) is converted to phosphatidylinositol 3,4,5-trisphosphate (PIP3) via ATP-induced phosphorylation . PIP3 binds to intracellular proteins, serine/threonine protein kinase (AKT), and 3-phosphoinositide-dependent kinase 1 (PDK1). PIP3 functions as a secondary messenger, recruiting AKT and PDK1 to the plasma membrane . The hydrophobic pocket of PDK1 induces autophosphorylation and the phosphorylating activation loop T308 of AKT, resulting in AKT activation . Activated AKT translocates from the plasma membrane to the cytoplasm and nucleus, phosphorylating the mammalian target of rapamycin (mTOR), which comprises mTORC1 and mTORC2 [ , , ]. mTORC1 activates ribosomal S6 kinase 1 (S6K1), thereby inducing cell growth, differentiation, and proliferation. mTORC1 and S6K1 phosphorylate insulin receptor substrate 1 (IRS1), leading to the inactivation or degradation of IRS1. This process creates a negative feedback loop in the PI3K/AKT/mTOR pathway. Conversely, mTORC2 activates AKT, establishing a positive feedback loop . The overactivation of mTOR is significantly associated with human tumors . The PTEN gene, which encodes phosphatase and tensin homolog, regulates the PI3K/AKT pathway by dephosphorylating PIP3 to PIP2. PTEN plays a significant role in maintaining normal cell function and inhibiting aberrant cell proliferation . Several studies have indicated that PIK3CA mutations activate the PI3K/AKT/mTOR pathway in breast cancer . Cell viability assays, which were conducted with triple-negative breast cancer (TNBC) cell lines harboring PIK3CA hotspot mutations, demonstrated a significant reduction in the proportion of early cell apoptosis in the PIK3CA mutant group compared to the PIK3CA wild-type group ( p = 0.04) . 
Furthermore, additional research has revealed that the hotspot mutations of PIK3CA enhance the interaction between p110 and lipid membranes . This interaction results in the dissociation of p110α from both the catalytic core and the p85 regulatory region. The mutations of p110α enhance the membrane-binding capacity of p110, thereby activating PI3K signaling . The abnormal activity of the PI3K/AKT/mTOR pathway often induces cellular overgrowth, apoptosis resistance, and tumor progression . The PI3K/AKT/mTOR pathway cross-regulates with the MAPK pathway, which comprises a protein cascade that transmits signals from the cell surface to the DNA in the cell nucleus . In the MAPK pathway, RTKs activate rat sarcoma virus (RAS), which subsequently recruits and activates rapidly accelerated fibrosarcoma (RAF), mitogen-activated protein kinase kinase (MEK), and extracellular signal-regulated kinase (ERK). Activated AKT inhibits RAF, thereby inducing the negative regulation of the MAPK pathway. Conversely, ERK phosphorylates mTORC1, activating the PI3K/AKT/mTOR pathway. Mutations in either the PI3K/AKT/mTOR pathway or the MAPK pathway may induce the dysregulation of signal transduction and alterations in the feedback loop, potentially leading to tumor development . Several studies have demonstrated that the PI3K signaling pathway plays a significant role in the diagnosis, prognosis, and treatment of breast cancer [ , , ]. A study analyzed the mutational pattern of PI3K/AKT/mTOR pathway genes in 589 Chinese women with breast cancer. Gene mutations in the PI3K/AKT/mTOR pathway were detected in 62.6% of cases, with the most commonly mutated genes being PIK3CA (45%), PTEN (7.5%), and AKT1 (5.9%) . Consequently, PIK3CA mutations are crucial for PI3K-targeted breast cancer treatment, and their impact should be taken into consideration when evaluating PI3K-targeted therapy. 3.2. Clinical Trials of PI3K Inhibitors in Breast Cancer Patients with PIK3CA Mutations PI3K inhibitors are categorized into three types: pan-PI3K inhibitors, isoform-specific PI3K inhibitors, and dual PI3K/mTOR inhibitors. Pan-PI3K inhibitors inhibit all class I PI3K isoforms; pictilisib (GDC-0941) and buparlisib belong to this group. Isoform-specific PI3K inhibitors target individual PI3K isoforms; an example is alpelisib, which selectively blocks the p110α isoform of PI3K. Similarly, MEN1611 and inavolisib are selective inhibitors targeting the p110α form of PI3K. Samotolisib is a dual PI3K/mTOR inhibitor, targeting both PI3K and mTOR in the PI3K/AKT/mTOR pathway, whereas capivasertib inhibits AKT. lists the clinical trials conducted using these inhibitors in PIK3CA-mutant breast cancer according to ClinicalTrials.gov ( https://clinicaltrials.gov ). A randomized phase III clinical trial (NCT02340221) demonstrated that taselisib, a PI3K inhibitor, exhibited favorable prognostic outcomes in ER+/HER2− breast cancer. The results indicated that PFS was significantly prolonged in the taselisib group compared to the placebo group (7.4 months vs. 5.4 months, HR = 0.7, p = 0.004). The taselisib group showed higher rates of objective response (ORR), clinical benefit (CBR), and duration of objective response compared to the placebo group. Although taselisib had effects on ER+/HER2− breast cancer, this study revealed a high rate of drug discontinuation and dose reduction (36.5%) due to toxicities such as infections, alopecia, pyrexia, and dyspepsia .
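The trial results summarized in this section are typically reported as median PFS values and between-arm comparisons. The following Python sketch (using the lifelines package) illustrates how such Kaplan–Meier estimates and log-rank p-values are obtained; the survival times are synthetic and do not correspond to any of the trials discussed here.

```python
import numpy as np
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

rng = np.random.default_rng(0)

# Synthetic PFS data (months) for two hypothetical arms; event = 1 means progression observed
pfs_treated = rng.exponential(scale=11.0, size=80)
pfs_control = rng.exponential(scale=6.0, size=80)
events_treated = rng.binomial(1, 0.8, size=80)
events_control = rng.binomial(1, 0.8, size=80)

# Kaplan-Meier estimate of median PFS in the treated arm
km = KaplanMeierFitter()
km.fit(pfs_treated, event_observed=events_treated, label="PI3K inhibitor arm")
print("Median PFS, treated arm (months):", km.median_survival_time_)

# Log-rank test comparing the two arms
result = logrank_test(pfs_treated, pfs_control,
                      event_observed_A=events_treated,
                      event_observed_B=events_control)
print("Log-rank p-value:", result.p_value)
```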
A randomized study (NCT01437566) investigated the efficacy of pictilisib in ER+ endocrine-resistant breast cancer. The findings indicated that median progression-free survival (mPFS) was higher in the pictilisib group than in the placebo group for both the PIK3CA wild-type and mutant subgroups; however, the difference was not statistically significant. In addition, gastrointestinal and skin toxicities were more frequently observed in the pictilisib group than in the placebo group . In the phase 1b clinical trial (NCT01219699), postmenopausal ER+ breast cancer patients received alpelisib in combination with fulvestrant following antiestrogen therapy. The patients were stratified into two subgroups based on their tumor-tissue PIK3CA mutation status. The ORR of the PIK3CA mutant group was 29% (95% CI 17–43%), while the PIK3CA wild-type group had no overall response. The mPFS was higher in the PIK3CA mutant group (9.1 months, 95% CI 6.6–14.6) than in the PIK3CA wild-type group (4.7 months, 95% CI 1.9–5.6). No dose-limiting toxicities (DLTs) were reported with alpelisib administered at doses ranging from 300 mg to 350 mg, whereas 10% of patients who received 400 mg of alpelisib exhibited DLTs, including diarrhea and vomiting. Furthermore, 11% of patients who received 400 mg of alpelisib in combination with fulvestrant experienced DLTs such as hyperglycemia and rash . In the phase I/II study (NCT02379247), patients with HER2− metastatic breast cancer received alpelisib in combination with nab-paclitaxel following chemotherapy. The study population was stratified into two groups based on PIK3CA mutation status: mutated or wild-type. PFS was higher in the PIK3CA mutation group than in the PIK3CA wild-type group (PFS: 11.9 vs. 7.5 months; HR = 0.44, p = 0.027). While the common adverse events were diarrhea, hyperglycemia, and rash, no patients discontinued treatment due to those adverse events . In the BYLieve phase II study (NCT03056755), patients with HR+/HER2− advanced breast cancer harboring PIK3CA mutations received alpelisib plus fulvestrant following CDK4/6 inhibitor treatment with an aromatase inhibitor . For the primary endpoint, 50.4% of participants were progression-free at 6 months (95% CI 41.2–59.6). The ORR was 17% (95% CI 11–25), and primary results showed that the CBR was 45% (95% CI 36–55). Twenty-one percent of patients discontinued the clinical trial due to adverse effects, with rash being the most prevalent. No treatment-related fatalities were reported. The SOLAR-1 trial (NCT02437318) demonstrated that the long-term administration of alpelisib plus fulvestrant in HR+/HER2− breast cancer patients with PIK3CA mutations contributes to improved overall survival (OS). Furthermore, the alpelisib group exhibited a lower rate of treatment discontinuation compared to the placebo group . In the phase 1b clinical trial (NCT03767335), HER2+ advanced breast cancer patients with PIK3CA mutations received MEN1611 plus trastuzumab alone (Group A) or MEN1611 plus trastuzumab and fulvestrant (Group B). The proportion of stable disease was similar between the two groups, while the partial response rates were 36% (Group A) and 28% (Group B), respectively. In addition, 20% of all patients, regardless of fulvestrant administration, experienced treatment-emergent adverse events, with the most frequent being diarrhea (64.3%), nausea (42.8%), and asthenia (31%).
While most of the adverse events were reversible and manageable, some adverse events caused treatment interruption (33.3%) and dose reduction (16.7%), primarily due to hyperglycemia, diarrhea, nausea, and asthenia . In the phase Ib study (NCT02124148), nine of the TNBC patients with the PIK3CA mutation received samotolisib plus prexasertib, resulting in an ORR of 13.3%. During dose escalation, DLTs were not observed; however, dose reduction toxicities were noted due to leukopenia, neutropenia, thrombocytopenia, and nausea (52.8%) . A double-blind randomized study (NCT04191499) evaluated the efficacy and safety of inavolisib in combination with palbociclib and fulvestrant in HR+/HER2− advanced breast cancer. The results demonstrated that mPFS was significantly longer in the inavolisib group than in the placebo group (15 vs. 7.3 months). The ORR of the inavolisib group was 58.4%, whereas the placebo group exhibited an ORR of 25.0%. The inavolisib group showed a higher discontinuation rate due to adverse events compared to the placebo group (6.8% vs. 0.6%), while disease progression or death was lower (HR 0.43; p < 0.001) . In a randomized phase III clinical trial (NCT04305496), capivasertib in combination with fulvestrant demonstrated efficacy in the treatment of HR+/HER2− metastatic breast cancer patients. However, discontinuation due to adverse events occurred in 13% of the capivasertib group, compared to 2% in the placebo group . Although capivasertib was not a PI3K inhibitor, it was approved for PIK3CA-mutant patients. The above clinical trials suggested that PI3K inhibitors had an effect on the prognosis of PIK3CA mutations in breast cancer patients. The most prevalent side effects were gastrointestinal in nature, such as diarrhea and nausea, as well as hyperglycemia and cutaneous eruptions. 3.3. Future Directions: Noncoding RNAs Targeting Gene Mutations Noncoding RNAs (ncRNAs) have emerged as potential molecular-level therapeutic agents for various diseases, including cancer, due to their ability to regulate the expression of genes associated with cell cycle or proliferation . MicroRNAs (miRNAs) are small regulatory ncRNA molecules, typically comprising approximately 22 nucleotides. miRNAs bind to the 3 UTRs of target mRNAs, resulting in either the degradation of mRNAs or the repression of target gene translation . It suggests that miRNAs can assume dual roles as either tumor suppressors or oncogenes, contingent upon their specific target genes. In their capacity as tumor suppressors, miRNAs inhibit the expression of oncogenes and impede uncontrolled cell growth and division . For example, miR-760 targets the HM13 gene which performs oncogenic function by activating the PI3K/AKT/mTOR pathway and promoting cell proliferation in breast cancer . miR-944 regulates the PI3K/AKT/mTOR pathway by inhibiting SPP1. It was suggested that the downregulation of miR-944 facilitates the expression of SPP1, which subsequently stimulates the PI3K/AKT/mTOR pathway in in vitro experiments with breast cancer cells. In vivo experiments confirmed that miR-944 plays the anticancer role, suppressing SPP1 and the PI3K/AKT/mTOR pathway in breast cancer . miR-203a can also act as a tumor suppressor through directly interacting with PIK3CA. This interaction controls the activation level of the PI3K/AKT/mTOR pathway and can be overactivated by PIK3CA mutations . Conversely, some miRNAs can function as oncogenes by activating the PI3K/AKT/mTOR pathway. 
miR193 inhibits growth family member 5 (ING5), which suppresses the PI3K/AKT/mTOR pathway. miR193 binds to the 3 UTRs of ING5, thereby promoting cell growth and invasion in breast cancer [ , , ]. In addition, miRNAs could induce drug resistance breast cancer. miR-205 plays a crucial role in modulating resistance mechanisms in breast cancer through its influence on epithelial–mesenchymal transition (EMT), cancer stem cells properties, and treatment responses. Specifically, miR-205 inhibits EMT by targeting transcriptional repressors, consequently leading to increased invasiveness and resistance to therapeutic interventions . The downregulation of miR-205 contributes to therapy resistance as it leads to enhanced EMT and the maintenance of cancer stem cell properties . Furthermore, the interaction between miRNAs and PI3K inhibitors could regulate key oncogenic pathways in the PI3K/AKT/mTOR pathway and improve targeted cancer therapies. For instance, miR-1287-5p can interact with PI3K inhibitors. miR-1287-5p directly targets the PIK3CB gene, which encodes the p110β catalytic subunit of PI3K, thereby downregulating the PI3K/AKT/mTOR pathway. This interaction leads to reduced proliferation and the increased apoptosis of TNBC cells. Also, the overexpression of miR-1287-5p sensitizes TNBC cells to PI3K inhibitors, suggesting a potential therapeutic strategy that combines miR-1287-5p modulation with PI3K inhibition to enhance anticancer efficacy . Long noncoding RNAs (lncRNAs) are heterogeneous ncRNAs exceeding 200 nucleotides in length. lncRNAs facilitate gene regulation, functioning as scaffolds for protein complexes, sequestering miRNAs, or engaging in direct interactions with chromatin to modulate gene expression. lncRNAs have been extensively implicated in the progression and development of cancer as either oncogenes or tumor suppressors . For example, LINC01133 stimulated mTORC2, leading to the activation of AKT and cell proliferation in TNBC . However, GAS5 exerts its tumor-suppressive effects by interacting with the PI3K/AKT/mTOR pathway through the upregulation of PTEN and the direct inhibition of AKT activation. It suggests that GAS5 possesses potential as a therapeutic target in breast cancer treatment . Furthermore, lnRNA H19 plays a critical role in mediating drug resistance in breast cancer through the modulation of drug efflux, apoptosis, and miRNA interactions . H19 may influence the expression of tumor-suppressive miRNAs and drug efflux transporters, resulting in drug resistance and decreased intracellular concentrations of chemotherapeutic agents. In addition, H19 has been implicated in the inhibition of apoptosis, enabling cancer cells to survive despite chemotherapy-induced DNA damage . In addition, DUXAP8 exhibited overexpression in breast cancer and demonstrated association with poor prognosis, including radiation resistance . It suggested that lncRNAs may serve as potential therapeutic targets in PIK3CA mutant breast cancer. Circular RNAs (circRNAs) constitute a distinct category of ncRNAs due to their structural characteristics. circRNAs are formed as closed loops of single-stranded RNAs derived from pre-mRNAs, wherein the 5’ terminus is covalently linked to the 3’ terminus . circRNAs exhibit remarkable stability, potentially impeding interactions with target mRNAs. 
A study demonstrated that circ-ARHGER28 could serve as a potential diagnostic marker of breast cancer (AUC = 0.889, 95% CI 0.780–0.998, p < 0.01), owing to its lower expression level in breast cancer tissue than in paracarcinoma tissue. Furthermore, circ-ARHGER28 suppresses the mRNA expression of PI3K, AKT, and mTOR. circRNAs also function as effective miRNA sponges. For instance, circTFF1 sponges miR-326, subsequently promoting the progression of breast cancer; an in vitro study showed that knockdown of circTFF1 in MCF-7 cells decreased cell migration, proliferation, and invasion. These studies demonstrated that ncRNAs play significant roles in PIK3CA-mutant breast cancer by modulating the PI3K/AKT/mTOR pathway and inhibiting tumorigenicity. They suggested that ncRNAs may serve as potential therapeutic targets or agents for PIK3CA-mutant breast cancer by modulating the expression of PIK3CA and its downstream pathway genes. However, further research is necessary to facilitate the clinical application of ncRNAs.
There are several challenges to be solved in cancer treatment, primarily tumor heterogeneity and drug resistance. Tumor heterogeneity in breast cancer significantly influences treatment response and contributes to drug resistance. Breast cancer represents a heterogeneous entity, encompassing diverse subtypes such as HR+/HER2− and TNBC, each characterized by distinct clinical features and responses to therapeutic interventions . Intra-tumoral heterogeneity encompasses genetic, epigenetic, and microenvironmental heterogeneity. Genetic heterogeneity contributes to drug resistance through the prevention of drug–target binding site interactions, the activation of alternative pathways, or the degradation of regulators. Epigenetic heterogeneity induces resistance phenotypes such as the drug efflux pump. It also promotes cancer stem cell phenotypes through the inhibition of cell cycle regulation, the activation of cell survival pathways, and the evasion of cell death. Microenvironmental heterogeneity impairs the delivery of therapeutic agents due to the dense extracellular matrix (ECM) and abnormal vasculature. It also induces cell survival signaling through ECM attachment and inflammatory cytokines. Therefore, PIK3CA-mutant tumors may harbor varying PIK3CA mutations across tumor regions, resulting in differential responses to PI3K inhibitors . Furthermore, the PI3K/AKT/mTOR pathway involves multiple feedback loops and crosstalk nodes with other signal transduction axes. The dysregulation of the PI3K/AKT/mTOR pathway contributes substantially to the development of cancer drug resistance . Resistance to PI3K inhibitors is frequently mediated by the adaptive activation of the MAPK pathway, primarily through feedback activation of RTKs, the loss of negative regulation, and ERK-driven transcriptional adaptation. Upon PI3K inhibition, the reduction in activated S6K1 leads to increased RTK signaling through the loss of S6K1-mediated negative feedback on IRS1. RTK activation enhances MAPK signaling, promoting cell proliferation and survival .
In the development of inhibitors targeting the PI3K/AKT/mTOR pathway, it is essential to consider dysregulation by mutations and additional activation, as these factors may contribute to drug resistance. Although the PI3K/AKT/mTOR and MAPK pathways have distinct mechanisms, they share numerous downstream targets that can potentially promote cell proliferation and facilitate drug resistance . To address those challenges in cancer treatment, combination strategies, such as combining PI3K inhibitors with MEK inhibitors for co-targeting the PI3K and MAPK pathways, may enhance efficacy . A study indicated that the dual downstream blockade of the MAPK and PI3K/AKT/mTOR pathways was more effective in a therapeutic strategy for pancreatic cancer . In an in vitro study, non-small-cell lung cancer (NSCLC) cells with PIK3CA mutations induced more effect responses of anti-proliferation and pro-apoptosis through the combined treatment of a dual PI3K/mTOR inhibitor and MEK inhibitor, compared to exclusive use . Molecular profiling has potential to elucidate cancer heterogeneity and drug resistance in breast cancer . A study of cancer genomes with PIK3CA mutations revealed that approximately 15% of breast cancers exhibit multiple PIK3CA mutations, with 95% being double mutations. These double mutations result in enhanced PI3K downstream signaling, cell proliferation, and tumor growth . In addition, the PIK3CA-mutant tumor might respond well to the selective inhibition of PIK3CA, such as copanlisib or alpelisib, rather than PI3K inhibitors, since the activation event is selective . Drug resistance in TNBC cells is facilitated by a coordinated reprogramming of MAPK pathway inhibition. The administration of a multi-targeted tyrosine kinase inhibitor against the upregulated kinases can restore an efficacious drug response in these cells. It suggests that identifying and inhibiting the adaptive response mechanisms of cancer cells can lead to enhanced anti-tumor efficacy . This review analyzed the impact of PIK3CA mutations on breast cancer, a disease that has long affected women worldwide. This study comprehensively examined all clinical trials involving breast cancer patients with PIK3CA mutations. Furthermore, we investigated the potential of ncRNAs as anticancer agents in PIK3CA-mutant breast cancer. Breast cancer exhibits PIK3CA mutations at a frequency of approximately 34%. PIK3CA exerts clinical effects on patients with breast cancer, particularly through the PI3K/AKT/mTOR pathway. The aberrant activity of the PI3K/AKT/mTOR pathway frequently promotes excessive cell division and resistance to apoptosis, contributing to the development and progression of tumors. Several clinical trials have been conducted on PIK3CA-mutant breast cancer. Breast cancer patients with PIK3CA mutations demonstrated distinct outcomes compared to PIK3CA wild-type patients. These results suggest that breast cancer treatment should be conducted based on patients’ molecular subtypes in the future.
COL11A1 as an novel biomarker for breast cancer with machine learning and immunohistochemistry validation
b0f7f2be-dc84-437e-b257-53c69b2184ed
9660229
Anatomy[mh]
Breast cancer is one of the most commonly diagnosed malignant tumors in the world. As the most frequent malignant tumor in women, more than 2.1 million women were diagnosed with breast cancer in 2018, and approximately 500,000 women died from this disease . Although advances in early detection and effective systemic treatment have decreased breast cancer mortality rates in North America and the European Union, breast cancer remains a leading cause of cancer death in less developed countries, second only to lung cancer, and almost all patients at an advanced stage have a poor prognosis . Therefore, new therapeutic approaches and targets need to be developed to reduce disease recurrence and death. With advances in machine learning, great success has been achieved in disease diagnosis, risk stratification, and the establishment of prognostic models , such as the use of medical imaging and artificial intelligence for the identification of lesions , the discovery of new biomarkers through data mining, drug discovery, and risk model construction . Traditionally, machine learning approaches are divided into supervised learning, unsupervised learning, and reinforcement learning categories. Machine learning algorithms can predict and classify large volumes of data based on known training data. As reported by Rahman , Linear Regression (LR), Support Vector Machine (SVM), Multi-Layer Perceptron (MLP), and Vector Auto-Regression have been the most widely used algorithms for tackling the coronavirus disease (COVID-19) pandemic. Thus, our aim was to identify potential prognosis-related biomarkers in breast cancer by computational approaches to assist clinical decision-making. In recent years, immunotherapy has emerged as a novel option for a variety of solid tumors . Unlike other solid cancers, breast cancer is relatively insensitive to immunotherapy. With the growing recognition of the importance of the tumor microenvironment (TME) in breast cancer progression, response to treatment, and resistance, the assessment of its immune and stromal cell infiltration has opened opportunities for breast cancer immunotherapy. Retrospective studies have shown that patients with breast cancer with higher levels of stromal-infiltrating immune cells generally have longer progression-free survival (PFS) and overall survival (OS) , and the results of immune checkpoint inhibitor (ICI) therapies for triple-negative breast cancer (TNBC) are encouraging . Studies are ongoing to unravel the immunoediting function of the host immune system in breast cancer to identify patients who will benefit from therapy . Collagen type XI alpha 1 (COL11A1) is a type XI collagen that belongs to the collagen family. Although it is mainly involved in the biological process of bone development , high levels of COL11A1 are associated with tumor metastasis, treatment resistance, and poor clinical outcome in several solid tumor types such as breast, pancreatic, and colorectal cancers . Gu et al. showed that COL11A1 was highly expressed in breast cancer tissues, and COL11A1 variant E was also significantly correlated with lymph node involvement and metastasis in breast cancers . As an important structural component of the extracellular matrix (ECM), COL11A1 was identified as a predictor of unfavorable immune infiltrates in pancreatic adenocarcinoma . However, the role of COL11A1 in the TME of breast cancers remains unclear. Data sourcing and pre-processing In this study, a total of six breast cancer datasets were included.
Clinical and expression profile data of patients originating from The Cancer Genome Atlas Program (TCGA) dataset were downloaded using the TCGAbiolinks package . GSE42568, GSE109169, GSE138536, GSE173839, and GSE103668 were derived from the GEO database. GSE42568 includes 104 breast cancer and 17 normal breast biopsies, and GSE109169 includes 25 paired breast samples. GSE138536 is a single-cell sequencing dataset containing 8 breast cancer samples. GSE173839 and GSE103668 include follow-up information on breast cancer patients receiving immunotherapy. Cell line expression and protein level expression data were obtained from the Cancer Cell Line Encyclopedia (CCLE) and the Clinical Proteomic Tumor Analysis Consortium (CPTAC) databases. Immune infiltration scores were evaluated using the R package ESTIMATE. Differential gene analysis of samples We performed a differential gene analysis of the breast cancer expression profile data comparing TCGA tumor and normal tissues using the edgeR package; the threshold criteria were |logFC| > 4 and an adjusted p-value of less than 0.01. Machine learning identifies feature genes We first defined patients with an OS shorter than 3 years as the short-term survival group, while those with a survival time greater than 3 years were defined as the long-term survival group. We used a random survival forest to identify feature genes related to short-term survival . The machine learning algorithms lasso regression and Support Vector Machine (SVM) were also used to select feature genes . Variables with an importance greater than 0.3 in the random forest were defined as significant. For lasso regression, the genes retained at the smallest value of lambda were defined as significant. For the SVM algorithm, the top 10 ranked features were defined as the important variables. The genes selected by all three machine learning algorithms were defined as the core genes. Validated expression of hub gene Samples from the TCGA and GTEx databases were used to validate the expression of the hub gene at the transcriptome level. GSE42568 and GSE109169 were also used to validate the expression differences of COL11A1 between tumor and normal tissues. Data from the CCLE database were used to validate gene expression differences between different cell lines. The protein-level expression differences of COL11A1 were assessed through the CPTAC database. Analysis of the prognosis value of the hub gene To further validate the prognostic value of the core gene, we evaluated the association of hub gene expression with OS, DSS, and PFS, respectively, using the R survival package. The cut-off values for the patient subgroups were determined using the R package survminer, and differences between groups were assessed by the log-rank test. Role of the hub gene in the TME The level of immune infiltration was evaluated by the ESTIMATE package, which calculated a stromal score and an ESTIMATE score for each sample according to gene expression. Additionally, the IOBR package calculated B cell, cancer-associated fibroblast (CAF), CD4 T cell, CD8 T cell, endothelial cell, macrophage, natural killer cell, and other cell infiltration scores. Spearman’s test was used to calculate detailed correlations between the core gene and B-cell, CD4, and CD8 immune cell markers. The association between prognosis and the level of immune infiltration was evaluated through the TIMER2 website .
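As a rough illustration of the three-algorithm feature-selection and intersection strategy described above, the Python sketch below uses scikit-learn stand-ins on synthetic data: a plain random-forest classifier in place of the random survival forest, and L1-penalised logistic regression in place of the Cox lasso. The gene names, thresholds, and data are placeholders, not the study's actual pipeline, which was run in R.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import LinearSVC

rng = np.random.default_rng(42)

# Synthetic expression matrix: 200 patients x 50 genes, with a binary
# short-term vs long-term survival label driven by a few of the genes.
X = rng.normal(size=(200, 50))
y = (X[:, 0] + X[:, 3] - X[:, 7] + rng.normal(scale=0.5, size=200) > 0).astype(int)
genes = np.array([f"gene_{i}" for i in range(50)])

# Random forest (stand-in for the random survival forest): keep the top 20% by importance
rf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X, y)
rf_hits = set(genes[rf.feature_importances_ >= np.quantile(rf.feature_importances_, 0.8)])

# L1-penalised logistic regression (lasso-like): keep genes with non-zero coefficients
lasso = LogisticRegression(penalty="l1", solver="liblinear", C=0.1).fit(X, y)
lasso_hits = set(genes[np.abs(lasso.coef_[0]) > 0])

# Linear SVM: keep the 10 genes with the largest absolute weights
svm = LinearSVC(C=0.1, max_iter=10_000).fit(X, y)
svm_hits = set(genes[np.argsort(np.abs(svm.coef_[0]))[-10:]])

# Genes selected by all three methods are treated as the core genes
core_genes = rf_hits & lasso_hits & svm_hits
print("Core genes:", sorted(core_genes))
```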
Hub gene and relationship with cancer-associated fibroblasts As the main component of the tumor stroma, CAFs play an important role in tumor recurrence and resistance to therapy. Therefore, we further evaluated the correlation of the hub gene with tumor-associated fibroblasts. We first validated the differential expression of this gene in different cell clusters in GSE138536, a single-cell dataset. We then calculated the correlation between the hub gene and classical fibroblast-associated markers, and finally, we evaluated the association between the level of infiltration and the clinical prognosis. Hub gene and relationship with immunotherapy Immunotherapy offers a new pathway for patients, but not all patients can benefit from this option, and screening of the population likely to benefit is necessary. Considering that immune checkpoints play an important role in tumor immunotherapy, we first examined the correlation between the core gene and immune checkpoints using Pearson’s test. Then two breast cancer immunotherapy datasets, GSE173839 and GSE103668, containing follow-up information, were interrogated to verify the differential expression of the hub gene between the immune response and immune tolerance groups. Additionally, we analyzed the correlation of this gene with 21 genes related to m6A methylation. COL11A1-related immune regulation genes We extracted the expression data of COL11A1 and 150 immune regulation genes, including chemokines, receptors, MHC, immunoinhibitory genes, and immunosuppressive genes (46). The Pearson’s correlation between COL11A1 and the immune regulation genes was then calculated. Prognosis signature construction and validation for OS Immune regulation genes associated with COL11A1 were entered into univariate and multivariate Cox regressions with OS. Genes significant in the univariate analysis were included in the multivariate Cox regression. A prognostic signature model was then constructed based on the multivariate Cox regression coefficients. The area under the ROC curve (AUC) was used to test the predictive efficiency of the model. Nomogram construction and validation To assess whether the signature had independent prognostic value compared to other clinical variables, we performed univariate and multivariate Cox regression analyses and visualized the regression results to construct a nomogram model; the C-index and calibration curve were used to evaluate the predictive efficacy and stability of the model, respectively. Immunohistochemistry of COL11A1 between normal and tumor breast tissues The tissues were washed with PBS and then incubated with 3% H2O2 for 10 min. The antibody against COL11A1 (1:100, 21841-1-AP, Proteintech, CA) was incubated at room temperature for 2 h. After incubation with a polymer enhancer for 20 min, the tissue was incubated with enzyme-labeled rabbit polymer. Slides were washed with PBS and developed with fresh diaminobenzidine, counterstained with hematoxylin (antigen retrieval was performed using 0.1% HCl), dehydrated with ethanol, cleared with xylene, and mounted with neutral balsam. The results were observed and photographed using a fluorescence microscope and visualized under a light microscope at 100× and 200× magnification by a blinded observer. Controls without primary antibodies showed no immunolabeling. Light to dark brown staining indicated a positive result.
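To illustrate how a multivariate Cox model can be turned into a prognostic risk score and then checked for discrimination, as in the signature-construction step described above, the following Python sketch uses the lifelines and scikit-learn packages on synthetic data. The gene names, coefficients, and the 36-month cut-off are assumptions for illustration only; the study itself used the R survival workflow, and time-dependent ROC analysis may be preferable in practice.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
n = 150

# Synthetic expression of three illustrative signature genes plus follow-up data (months)
df = pd.DataFrame({
    "gene_a": rng.normal(size=n),
    "gene_b": rng.normal(size=n),
    "gene_c": rng.normal(size=n),
})
latent_risk = 0.8 * df["gene_a"] - 0.5 * df["gene_b"]
df["time"] = rng.exponential(scale=60.0 / np.exp(latent_risk))
df["event"] = rng.binomial(1, 0.7, size=n)

# Multivariate Cox model: the fitted coefficients define the risk score
cph = CoxPHFitter()
cph.fit(df, duration_col="time", event_col="event")
df["risk_score"] = cph.predict_partial_hazard(df)

# Crude discrimination check for 3-year (36-month) outcome status
status_36m = ((df["time"] < 36) & (df["event"] == 1)).astype(int)
print("AUC for 36-month status:", round(roc_auc_score(status_36m, df["risk_score"]), 3))
```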
The workflow of this study is shown in .

Identification of the hub gene COL11A1
A total of 149 up-regulated genes were identified by the differential analysis of tumor and normal tissues in the TCGA dataset . The results are shown in the volcano map . Random survival forest analysis of these differential genes revealed 11 genes with an importance greater than 0.3 . LASSO regression, a machine learning algorithm, was also used for feature variable screening, and a total of 25 candidate genes were selected at the minimum lambda value of 0.018 . In addition, the top 10 feature support vectors obtained by the SVM algorithm were selected as candidate genes .
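A sketch of how the three selections and their intersection could be implemented is shown below. It is an illustrative reconstruction rather than the authors' code: the toy data stand in for the differentially expressed genes with survival information, and ranking genes by the absolute weights of a linear SVM is an assumed reading of the "top 10 feature support vectors" step.

```r
# Illustrative three-way feature selection and intersection (hypothetical toy data).
library(randomForestSRC)
library(glmnet)
library(e1071)
library(survival)

set.seed(1)
n <- 200
genes  <- paste0("DEG", 1:30)
X      <- matrix(rnorm(n * 30), nrow = n, dimnames = list(NULL, genes))
time   <- rexp(n, rate = 0.1 * exp(0.9 * X[, 1]))
status <- rbinom(n, 1, 0.7)
dat    <- data.frame(time = time, status = status, X)
group  <- factor(ifelse(time < 3, "short", "long"))   # short- vs long-term survival

# 1. Random survival forest: keep variables with importance > 0.3
rsf <- rfsrc(Surv(time, status) ~ ., data = dat, importance = TRUE)
rsf_genes <- names(rsf$importance[rsf$importance > 0.3])

# 2. LASSO-Cox regression: keep features with non-zero coefficients at lambda.min
cvfit <- cv.glmnet(X, cbind(time = time, status = status), family = "cox")
lcoef <- as.matrix(coef(cvfit, s = "lambda.min"))
lasso_genes <- rownames(lcoef)[lcoef[, 1] != 0]

# 3. Linear SVM (short vs long survival): rank genes by |weight| and keep the top 10
svm_fit <- svm(X, group, kernel = "linear")
w <- t(svm_fit$coefs) %*% svm_fit$SV
svm_genes <- names(sort(abs(w[1, ]), decreasing = TRUE))[1:10]

# 4. Candidate core gene(s): intersection of the three selections
Reduce(intersect, list(rsf_genes, lasso_genes, svm_genes))
```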
Combining the above results in a Venn diagram, we found that only COL11A1 was common to the results of the three algorithms; thus, this gene was identified as the core gene of the study .

COL11A1 is highly expressed in breast cancer samples and is associated with a poor prognosis
The heatmap shows the expression of the COL11A1 gene in normal breast tissues and in different cancer tissues. We found that this gene was significantly highly expressed in breast cancer tissues, while it was almost absent in normal tissues . To further verify this result, we performed an expression difference analysis using tumor samples from TCGA and normal samples derived from GTEx, and obtained consistent results . Furthermore, both GEO datasets, GSE42568 and GSE109169, also confirmed that the expression of COL11A1 was higher in tumor tissues compared to normal tissues . We also verified the expression of this gene at the cell line and protein levels, and the results suggested that COL11A1 was highly expressed in the HCC38, HCC1395, MDAMB157, HCC1954, and ZR751 cell lines and had lower expression in the T47D, MCF7, HCC1428, CAMA1, and BT483 cell lines . The results of the protein expression analysis suggested that COL11A1 had higher expression in tumor samples compared to normal tissues . Finally, we performed a survival analysis of this gene and found that high expression of COL11A1 was associated with a poor prognosis, whether in terms of OS, disease-specific survival, or the progression-free interval .

High expression of COL11A1 promoted tumor immune infiltration
The TME is closely associated with tumor progression and metastasis; thus, we explored the correlation between this gene and the TME of breast cancer in the current study. The results suggested that COL11A1 expression was significantly positively correlated with the stromal score (r=0.49, p<0.001) and the ESTIMATE score (r=0.29, p<0.001) in the TME . Furthermore, the results of the immune cell infiltration analysis also revealed that the expression of COL11A1 was negatively correlated with the level of B cells, CD4 and CD8 T cells and positively correlated with CAFs . Further analysis of the correlation between this gene and marker genes of B cells, CD4, and CD8 T cells revealed that COL11A1 was significantly negatively correlated with a B cell marker, positively correlated with a CD4 T cell marker, and negatively correlated with a CD8 T cell marker (r=-0.156, r=0.113 and r=-0.160, respectively; p<0.001) . The results of the immune infiltration and survival analysis suggested that, in patients with low expression of COL11A1, the degree of B cell infiltration was negatively correlated with patient prognosis; this finding also applied to the high COL11A1 expression group . However, the level of CD4 and CD8 T cell infiltration was negatively correlated with patient prognosis. These findings further support the association of COL11A1 with tumor immune infiltration and patient prognosis .

High expression of COL11A1 positively correlated with CAFs
CAFs are present in the tumor stroma and contribute to tumor invasion by promoting the epithelial-mesenchymal transition and participating in tumor angiogenesis. We first analyzed the distribution of COL11A1 in different clusters of cells at the single-cell level, and the results suggested that the data were clustered into four clusters, namely myofibroblasts, Mono/macro, epithelial, and fibroblasts .
COL11A1 was distributed most prominently in myofibroblasts and fibroblasts. Additionally, the expression of this gene was significantly higher in fibroblasts than in epithelial cells . Further analysis examined the relationship between COL11A1 and classical CAF marker genes, and we found that the gene was significantly positively correlated with the CAF marker genes (FAP, PDPN, THY1, ACTA2, COL1A1, PDGFRA, and PDGFRB; p<0.001) . The results of the survival analysis suggested that the deeper the immune infiltration, the worse the prognosis in patients with low COL11A1 expression, whereas the opposite pattern was observed in the prognosis analysis of patients with high COL11A1 expression .

COL11A1 predicted the response rate to immunotherapy
The analysis of COL11A1 and immune checkpoints suggested that COL11A1 expression was positively correlated with immune checkpoints (CD276, TIGIT, and ENTPD1; p<0.001) . Further analysis of the two immunotherapy datasets revealed that, before analysis, 67.39% of the genes overlapped between the two datasets and a batch effect was present; after removal of this effect, the merged data did not show any batch effect . When we analyzed the differences in COL11A1 expression between the response and non-response groups, we found that COL11A1 showed higher expression in the response group than in the non-response group, although the difference was not significant (p=0.33) .

COL11A1 and m6A methylation
m6A methylation, as a modification of RNA molecules, has become a hot research topic in the life sciences in recent years. Studies have shown that genes related to m6A methylation promote tumor progression and may mediate tumor immune tolerance. Therefore, we further analyzed the correlation between this gene and m6A methylation-related genes. The results suggest that this gene is associated with several m6A methylation-related genes .

COL11A1-related immune regulation genes and the construction of a five-gene signature
A total of 43 immune regulation genes related to COL11A1 were identified from breast cancer samples. To determine the prognostic value of these genes, we constructed predictive models using univariate and multivariate Cox regression. The results of the univariate analysis suggested that a total of 19 immune regulation genes were associated with prognosis , and the multivariate results demonstrated that 5 immune regulation genes were independent risk factors associated with patient outcomes . Finally, we constructed a 5-gene signature prognostic model based on the above results. We divided patients into high-risk and low-risk groups based on the median value of the model scores. We found that patients in the high-risk group had a worse prognosis than those in the low-risk group (HR=2.47, 95% CI 1.83-3.33, p<0.001). The area under the model's ROC curve was 0.658, suggesting that the model had a high predictive value . The internal validation of the model also demonstrated that high-risk patients had a poorer outcome than low-risk patients (HR=2.30, 95% CI 1.47-3.60, p<0.001). The area under the ROC curve was 0.651 in the validation group, indicating that the model was robust .

Nomogram modeling and efficacy evaluation
We first evaluated the clinical value of the signature based on univariate and multivariate Cox regression analysis. Univariate results suggested that age, stage, estrogen receptor (ER) positivity, progesterone receptor (PR) positivity, and the risk score could be risk factors that influence the prognosis of patients.
The results of the multivariate analysis showed that age, stage, ER status, and risk score were independent prognostic factors for patients . Finally, we visualized the analysis results to construct a nomogram model to predict the OS of the patients . It is noteworthy that when we built the final version of the model, PR status was included, although it showed no significance in the multivariate analysis; nonetheless, this variable is very important in clinical decision-making. The C-index of this model is 0.776, and the calibration curve deviated only slightly from the ideal curve, suggesting that the model could predict the OS of patients at 5 and 10 years with good accuracy .

COL11A1 expression is high in breast cancer clinical samples
Our immunohistochemistry (IHC) results demonstrate that COL11A1 was highly expressed in breast cancer tissues compared with normal tissues in clinical samples .
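The nomogram construction and calibration described here can be sketched with the rms package, as below. This is a generic illustration with a hypothetical data frame (overall survival in days, event indicator, age, stage, ER and PR status, and the signature risk score) and assumed bootstrap settings, not the authors' exact model.

```r
# Illustrative rms sketch of a Cox-based nomogram with calibration (toy data).
library(rms)
library(survival)

set.seed(1)
n <- 300
df <- data.frame(
  age        = rnorm(n, 58, 10),
  stage      = factor(sample(c("I", "II", "III"), n, replace = TRUE)),
  ER         = factor(sample(c("negative", "positive"), n, replace = TRUE)),
  PR         = factor(sample(c("negative", "positive"), n, replace = TRUE)),
  risk_score = rnorm(n)
)
df$time  <- rexp(n, rate = 0.0002 * exp(0.5 * df$risk_score))   # days
df$event <- rbinom(n, 1, 0.6)

dd <- datadist(df); options(datadist = "dd")

fit <- cph(Surv(time, event) ~ age + stage + ER + PR + risk_score,
           data = df, x = TRUE, y = TRUE, surv = TRUE, time.inc = 5 * 365)

surv_fun <- Survival(fit)
nom <- nomogram(fit,
                fun = list(function(x) surv_fun(5 * 365, x),
                           function(x) surv_fun(10 * 365, x)),
                funlabel = c("5-year OS probability", "10-year OS probability"))
plot(nom)

# Bootstrap calibration at 5 years and the concordance index
cal <- calibrate(fit, cmethod = "KM", method = "boot", u = 5 * 365, m = 50, B = 100)
plot(cal)
rcorr.cens(-predict(fit), Surv(df$time, df$event))["C Index"]
```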
Breast cancer, one of the three most common tumors in the world, seriously threatens women's health worldwide. Although early-stage breast cancer can be successfully treated with surgery, chemotherapy, or combined therapy, more than 30% of patients diagnosed at an early stage will eventually progress and develop advanced disease . Advanced breast cancer is incurable with traditional treatments and has a long-term survival rate of less than 5% . These data reveal the urgent need for innovative treatments to reduce relapse and metastasis of breast cancer. The successful application of immunotherapy in a variety of solid tumors and the results of immune checkpoint antagonists targeting programmed cell death 1 (PD-1) and programmed death ligand-1 (PD-L1) in metastatic breast cancer have raised interest in immune-based strategies for breast cancer . Therefore, it is of great significance to explore new immune-related biomarkers to predict treatment response and prognosis . We found that COL11A1 was highly expressed at both the transcriptome and protein levels in breast cancer tissues and could serve as a marker of a poor prognosis. Furthermore, we also found that COL11A1 was positively correlated with risk factors in the breast cancer TME.
Finally, based on the above results, we identified a COL11A1-associated immunological signature as a prognostic predictor in breast cancer. COL11A1 is located on chromosome 1p21.1, encodes one of the three alpha chains of type XI collagen, and plays a role in skeletal development and fibrillogenesis. However, the expression and biological function of COL11A1 in cancers are still controversial and tumor-specific. Some studies have reported that COL11A1 is highly expressed and correlated with a poor prognosis in breast cancer, while its expression is low and serves as a good prognostic indicator in some hematological tumors . Therefore, the precise mechanisms of COL11A1 in breast cancer should be explored further. The composition of immune cells and stromal cells in the TME is known to play an important role in metastasis, immune escape, and therapeutic resistance in cancers . Pearce et al. showed that a COL11A1-related signature was positively correlated with Treg and TH2 cells in ovarian cancer specimens, indicating a poorer prognosis. In a recent study, high expression of COL11A1 was positively correlated with CD4+ T and CD8+ T cells, tumor-associated macrophages (TAM), neutrophils and dendritic cells in colon adenocarcinoma, although the function of these immune cells in the colon adenocarcinoma TME has not been identified . These results suggest that, as a component of the ECM, COL11A1 may be affected by the variable TME in different cancer contexts. The early stage of mammary tumorigenesis is characterized by acute inflammation, which can activate innate immune cells, such as neutrophils, dendritic cells (DC), and tumor-specific T cells, to eliminate breast cancer cells. Transformed cells, however, escape elimination, and a chronic inflammation-like TME is established, mainly composed of suppressive immune cells, CAFs, and endothelial cells, leading to immune evasion in advanced breast cancer . As shown in our study, COL11A1 had a negative correlation with immune cells (B cells, CD4+ T cells, CD8+ T cells, natural killer cells, and macrophages) in the TME, but showed a positive correlation with CAFs and endothelial cells, which was consistent with reports that overexpression of COL11A1 was only observed in CAF-enriched areas of different cancers and was associated with a poorer prognosis . All these results imply that COL11A1 could be involved in the tumor immune evasion process and could act as an immune-related biomarker of poor prognosis in breast cancer. As a representative class of ICIs, anti-PD-1 and anti-PD-L1 monoclonal antibodies have been approved by the Food and Drug Administration for the immunotherapy of metastatic triple-negative breast cancer (TNBC). Faced with these options for breast cancer, it is critical to select the patient populations that could benefit from ICI treatment . In addition to PD-L1 expression and tumor mutational burden (TMB), some studies have proposed that TME characteristics could also be used as an indicator to predict the response to ICI treatment . Furthermore, a retrospective study identified Meflin as a predictive marker of CAFs that could increase sensitivity to ICI treatment . In our study, patients with higher expression of COL11A1 also showed a better response to ICI treatment, which indicates that COL11A1 has potential as a candidate predictor of the response to ICI treatment, in addition to being an immune-related prognostic biomarker.
However, previous studies have shown the opposite predictive role of COL11A1 in the response to PD-1 checkpoint immunotherapy, reconfirming the heterogeneity and complexity of the TME in cancers . Thus, it is of great importance to take advantage of multi-omics methods and computational algorithms to interpret the function of genes at the single-cell level in different contexts. With the development of next-generation sequencing technology and computational intelligence techniques, more and more disease markers are being identified, and drugs developed based on these targets will greatly improve patients' clinical benefit in the future . In the present study, we used machine learning to identify a new breast cancer marker and further confirmed its potential to become a new target. However, relying on a single gene to predict a patient's prognosis has drawbacks because, owing to the heterogeneity of the disease, disease development can be associated with the abnormal expression of multiple genes. Therefore, we screened five immune pathway-related genes associated with COL11A1 in breast cancer and constructed a signature to assess patient prognosis based on these genes. This signature also predicted patient prognosis independently of the clinical variables, implying that our multigene signature had high predictive efficacy. In addition, we constructed a nomogram, a visual predictive tool based on the signature and clinical variables, which, compared with single-gene models and models containing only clinical information, showed richer predictive properties and greatly enhanced the clinical value of the model. We identified COL11A1 as a potential therapeutic target in breast cancer through machine learning, and the high expression of this gene was generally associated with a poor prognosis. Additionally, this gene was also closely associated with breast cancer tumor immune infiltrating cells and could be involved in the tumor immune infiltration process. However, there are some limitations in our study. First, additional machine learning algorithms need to be attempted and carefully combined to obtain accurate training results. Second, single-cell sequencing data from breast cancer should be included to further clarify the relationship between COL11A1 and the TME in breast cancer. Third, additional clinical RCTs are needed to confirm the predictive value of COL11A1 for the immunotherapy response of breast cancer. Fourth, the possible role of COL11A1 in the TME of breast cancer should be further explored through basic research studies. The datasets presented in this study can be found in online repositories. The names of the repository/repositories and accession number(s) can be found in the article/ . The studies involving human participants were reviewed and approved by the Ethics Committee of Hainan Medical University. The patients/participants provided their written informed consent to participate in this study. WS, ZC (2nd author), HL, WP, and CL: Conceptualization, data curation, formal analysis, roles/writing—original draft, writing—review and editing. GW, RF, ZC (8th author), and GC: Roles/writing—original draft. PF, WP and CL: Funding acquisition, methodology, project administration, resources, supervision. All authors contributed to the article and approved the submitted version.
We thank CM for providing experimental validation of COL11A1 with the IHC technique. The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest. All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.
A randomized controlled trial comparing community lifestyle interventions to improve adherence to diet and physical activity recommendations: the VitalUM study
85b18ac7-9dce-4476-a314-9130f508dd8f
8032577
Health Communication[mh]
Meeting the guidelines for fruit and vegetable consumption and physical activity (PA) lowers the risk for cardiovascular morbidity and mortality . Therefore, it is recommended to refer adults with an unhealthy lifestyle to interventions that promote lifestyle change . Dutch guidelines advise that individuals consume at least two servings (approximately 200 grams) of fruit and 200 grams of vegetables every day and engage in moderately intensive PA on at least five days per week, for 30 or more minutes a day . Many adults, however, do not meet the public health recommendations for these behaviours. Approximately half of the Dutch general population (aged 40-75 years) is sufficiently physically active , and about one-third meet the fruit and vegetable recommendation . Therefore, interventions are needed to promote adherence to these guidelines, especially interventions that can be implemented at scale, considering the population in need is significant . Previous studies have indicated that theory-based computer tailoring and (telephone) motivational interviewing have the potential to reach large populations and change health behaviours . However, few studies have compared these methods in changing PA or fruit and vegetable consumption. In this study, we aim to evaluate the effects of computer-tailored print communication (TPC), telephone motivational interviewing (TMI) and a combination of the two on meeting the public health guidelines for PA and fruit and vegetable consumption. We hypothesize that TMI will outperform TPC, since TMI provides real-time tailoring and interpersonal contact, ingredients assumed to produce a better outcome (at 18 months after baseline) . A detailed description of the study design can be found elsewhere . Our theory-informed interventions promote health behaviour by changing behavioural determinants. Therefore, our second aim is to examine the predictors of guideline adherence in order to determine the success of the intervention. Pedometers are often utilized to increase PA . Our third aim is to examine the effects of pedometers on adherence to the PA guideline. Altogether, our comparative-effectiveness study contrasting three broad-reach intervention delivery modalities may help inform the appropriate use of resources to change public lifestyle behaviour.

Trial design
The study participants were allocated to four groups using stratified computer randomization (Actigraph). One group received four TPC letters, one group received four TMI sessions, one group alternately received two TPC letters and two TMI sessions (combined intervention), and one group received no intervention (control group). After the baseline assessment, treatment allocation could not be concealed due to the different nature of the interventions. Investigators were aware of the group assignment, but they had no in-person contact with participants during the provision of the interventions. There was also no in-person contact during the self-report assessments, except that some participants were phoned to collect missing data. Intervention effects were assessed with two follow-up written questionnaires (weeks 47 and 73). All letters and questionnaires were mailed to the participants' home addresses. Two reminders were sent, if needed. Furthermore, two intermediate telephone surveys were conducted.
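As an illustration of this kind of stratified allocation, a brief sketch is given below; the participant list and strata (hypertension status and sex) are hypothetical, and the trial used its own randomization software, so this is not the procedure actually applied in VitalUM.

```r
# Illustrative stratified randomization into the four trial arms (toy data;
# the strata shown here are assumptions, not the trial's exact procedure).
set.seed(2024)

arms <- c("TPC", "TMI", "Combined", "Control")

participants <- data.frame(
  id           = 1:16,
  hypertension = rep(c("yes", "no"), each = 8),
  sex          = rep(c("male", "female"), times = 8)
)

participants$arm <- NA
for (stratum in split(seq_len(nrow(participants)),
                      interaction(participants$hypertension, participants$sex))) {
  # balanced assignment of arms in random order within each stratum
  participants$arm[stratum] <- sample(rep(arms, length.out = length(stratum)))
}
table(participants$arm, participants$hypertension)
```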
In week 25 (after two intervention exposures), a telephone survey assessed all participants' behaviours and behavioural determinants to gather up-to-date information for the next computer-tailored intervention and to assess the intermediate effects of the interventions. Participants in the TPC group received an additional telephone survey (week 39) to collect the most recent data on their behaviour and its determinants for the fourth tailored letter. Data entry was done by an external organization (MEMIC-Centre for data entry and management). Participants in the intervention groups received their four intervention components at 5, 13, 30 and 43 weeks after the baseline assessment. Half of the participants in all the intervention groups were randomly selected to receive a pedometer before the third intervention component (week 29); the remainder received this device after the last follow-up. The Medical Ethics Committee of Maastricht University and the University Hospital Maastricht approved the study.

Participants
Participants (n = 6420 outpatients) were randomly selected from the database of the Research Network Family Medicine Maastricht (RNFM), which contains systematically collected medical data (demographics, disease, diagnosis, and medication) of all patients from 23 Dutch general practices (GPs), reflecting Dutch primary care practice (Fig. ) . Inclusion criteria were: (1) aged 45–70 years; (2) about 50% diagnosed by their GP as hypertensive according to the International Classification of Primary Care (ICPC code K86 or K87 for hypertension without or with organ damage, respectively; https://www.nhg.org/themas/artikelen/icpc-online , accessed 9 September 2020); (3) about 50% male; (4) not participating in other studies according to the GP database; and (5) only one person per address. Hypertension status was included to check whether already having a risk factor for cardiovascular disease (CVD; disease awareness) moderated the effects of the intervention ; this is also why we selected patients aged 45–70 years. The GPs checked the suitability of the selected participants. Exclusions (n = 875, 14%) were due to, for example, inability to walk or inability to speak or read Dutch. A total of 5545 people received an invitation letter explaining the study content and randomization procedures. Non-responders (n = 2341) received a reminder after 4 weeks. Reasons for refusing participation included "lack of interest" or "lack of time". A total of 2881 people returned the consent form and thereupon received a written baseline questionnaire. Those who returned the questionnaire (n = 2568) received feedback on their lifestyle behaviours and were included in the randomised controlled trial (RCT) (n = 1629) if they failed to meet at least two of the three Dutch public health guidelines (for PA and either fruit or vegetable intake). Participants were recruited and treated in batches, with 18 months elapsing between the first and last batch.

Interventions
TPC. The TPC letters were built on existing theory-based computer-tailored interventions whose effectiveness has been established in earlier studies e.g., . They were based on the reasoned action approach , social cognitive theory , and insights from stages of change models (i.e., the transtheoretical model) and implementation intentions theory , combined in the I-change Model , as well as on additional qualitative research. Study participants received stage-matched advice .
The tailoring variables were age, sex, awareness, attitude (pros and cons), self-efficacy expectations, action plans, stage of change and current behaviour according to the self-report questionnaire. Data on these variables were gathered with our written questionnaires. A computer algorithm connected survey items to a feedback message file in order to provide written individual feedback. The letters on PA, TPC1 and TPC2 (each 3-6 pages), were personalized with the participant's name and included the following elements: introduction, specific behavioural feedback on the targeted behaviour and related social-cognitive determinants, stage-matched advice to change behaviour, and conclusions. The subsequent letters on fruit and vegetable consumption, TPC3 (2-4 pages) and TPC4 (4-6 pages), were also personalized and reinforced tailored feedback on behavioural progress and stages of change. We used a structure similar to that in TPC1 and TPC2.

TMI. Motivational interviewing is grounded in the same social–cognitive theories mentioned above, which are translated into specific relational and technical counselling methods . Interview protocols were derived from the Healthy Body Healthy Spirit trial and used to support treatment integrity . Participants could choose the order of the conversation topics in interviews 1 and 3; if PA was preferred in interview 1, fruit and vegetable consumption was discussed in interview 2, and vice versa. Procedures were performed as follows: giving an introduction, assessing current behaviours and progress, discussing the public health guideline, assessing and enhancing motivation and self-efficacy for behaviour change, assessing readiness to change, and summarizing and closing the session. Additional topics could be discussed (e.g., the current situation and progress on action plans in subsequent interviews, the tailored letters (combined group) and values in life). Information on the training of those administering TMI and of the raters of TMI fidelity, both conducted by Master's level students in Psychology and Health Promotion, has been described elsewhere . Interviewers had MI beginner proficiency.

Combined. The first letter and interview addressed PA, and the second letter and interview focused on fruit and vegetable consumption.

Control. Participants received one tailored letter after the last follow-up questionnaire.

Pedometer. The pedometer was provided with an instructional letter that encouraged participants to gradually increase their number of steps to at least 10,000 a day .

Outcome measurement
The modified CHAMPS PA questionnaire was used to assess the frequency of an activity (times per week), its duration (hours per week) and intensity (e.g., walking in a leisurely vs. brisk manner) concerning a typical week during the past 4 weeks . The activities measured included cycling in a leisurely or brisk manner and doing light or heavy housekeeping. Metabolic equivalents (METs) were determined for each activity on the basis of the PA compendium by Ainsworth et al. . MET levels were used as cut-offs to calculate the total number of weekly PA hours with at least moderate intensity. Only activities with at least three METs were considered moderate for all participants .
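A minimal sketch of this MET-based scoring is shown below; the activity list, MET values and reported durations are hypothetical examples, not the actual modified CHAMPS items or compendium entries.

```r
# Hypothetical example of deriving weekly hours of at least moderate-intensity
# activity from duration items and MET values (illustrative only).
activities <- data.frame(
  activity       = c("walking_leisurely", "walking_briskly", "cycling_briskly",
                     "light_housekeeping", "heavy_housekeeping"),
  met            = c(2.5, 4.0, 6.0, 2.3, 4.0),   # assumed MET values
  hours_per_week = c(2.0, 1.5, 1.0, 3.0, 0.5)    # reported weekly duration
)

# Only activities of >= 3 METs count as (at least) moderate intensity
moderate_hours <- sum(activities$hours_per_week[activities$met >= 3])
moderate_hours
```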
Because the modified CHAMPS cannot determine which participants are physically active with moderate intensity for at least five days a week, the summary question from the Short QUestionnaire to ASsess Health-enhancing PA (SQUASH) was added: "How many days a week do you cycle, engage in do-it-yourself activities, do gardening, play a sport or engage in other strenuous physical activities for at least 30 min a day?". Participants were only coded as meeting the PA guideline if they were physically active with at least moderate intensity for at least 2.5 h a week according to the modified CHAMPS and answered "five or more days" to the SQUASH summary question . The food frequency questionnaire (FFQ) was used to estimate fruit and vegetable intake . Participants filled out 16 items about the frequency (days per week) and quantity (servings/serving spoons per day) of vegetables (cooked and raw) and fruit (juice, tangerines, other citrus fruits, apples or pears, bananas, and other fruits) concerning a typical week during the past 4 weeks. Frequency and quantity were used to determine daily consumption. Adherence was sufficient if participants consumed at least two servings of fruit a day and at least 200 grams of vegetables a day (four serving spoons) . Covariates in the analyses of intervention effects were sex, hypertension status, age, highest completed level of education, marital status, work situation, native country, presence of diabetes, smoking behaviour, alcohol consumption, family history of CVD, stress, body weight and height to calculate BMI (kg/m²), region of residence, season at completion of the baseline questionnaire, and saturated fat intake, as well as (un-)favourable behavioural beliefs, social support, descriptive normative beliefs, self-efficacy expectations, action plans, habit strength, stage of change and awareness (see Table for measurement details). Awareness was based on self-rated behaviour (by asking participants whether they rated, for instance, their intake of vegetables as low or high; 1 = low to 5 = high). This score was compared to the assessment of guideline adherence. Participants were allocated to two awareness levels: overestimators (not meeting the guideline and rating vegetable intake as intermediate to high) and underestimators or realists (other).

Sample size
At the start of this RCT, the results of similar studies were unavailable. The sample size calculation was based on an expected effect size (Cohen's d) of 0.3, a power of 0.9, an alpha of 0.01 (multiple testing correction), an intraclass correlation of 0.02 and an average of 70 outpatients per general practice. More details were previously published .

Statistical analysis
Baseline characteristics of the intervention groups were assessed with SPSS for Windows, Version 15.0 (SPSS Inc., Chicago, 2006). Other analyses were done with MLWiN .

Selective dropout
Selective dropout was examined (dependent variable, 0 = no; 1 = yes) with mixed logistic regression using PQL estimation. The predictors of dropout used were group, time of measurement, group by time of measurement interactions, and the baseline values of age, gender, hypertension, region, and the level of education.

Intervention effectiveness
Separate recommended intake levels are given in the Netherlands for fruits and vegetables, as they have been found to differ in consumption circumstances and meals, as well as in their associations with health and disease . Hence, separate analyses were conducted.
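The adherence rules above can be expressed as a short coding sketch, combining the CHAMPS-based weekly hours with the SQUASH summary item and the FFQ totals; the variable names and example values are hypothetical, not those of the VitalUM dataset.

```r
# Illustrative coding of guideline adherence as described in the text
# (variable names and example values are hypothetical).
champs_moderate_hours <- 3.0   # weekly hours of >= 3 MET activity (modified CHAMPS)
squash_days           <- 5     # days/week with >= 30 min of activity (SQUASH item)
fruit_servings_day    <- 1.5   # daily fruit servings (FFQ)
vegetables_grams_day  <- 220   # daily vegetable intake in grams (FFQ)

meets_pa_guideline    <- champs_moderate_hours >= 2.5 & squash_days >= 5
meets_fruit_guideline <- fruit_servings_day >= 2
meets_veg_guideline   <- vegetables_grams_day >= 200

c(PA = meets_pa_guideline, fruit = meets_fruit_guideline, vegetables = meets_veg_guideline)
```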
Intermediate (week 25), short-term (week 47) and follow-up (week 73) effectiveness was analysed with mixed logistic regression using PQL estimation. These were intention-to-treat analyses, since all available measurements of all randomized participants were analysed without imputation of missing measurements. The mixed model had three levels: GPs, participants, and measurements (baseline and 25, 47 and 73 weeks). GP and participant effects were included as random intercepts. Additionally, the effects of time of measurement, group and time of measurement*group were allowed to vary randomly between GPs (time, group, and time*group) or participants (time), but no significant variance was found. Thus, the reported models had random intercepts only. Socio-demographic variables, lifestyle variables, cognitive behavioural determinants, and baseline measures of the primary outcomes were included as between-subject covariates (except for the baseline behaviour of the outcome at hand, which was included as a repeated measure to allow the inclusion of patients who dropped out after the baseline measurement) . To the extent that these covariates are related to the outcome behaviour at hand, including them improves the power and precision of treatment-effect testing and estimation due to reduced residual outcome variance. Having been sent a pedometer during the intervention period was included as a within-subject factor (0 = no; 1 = yes), since it was sent to participants 29 weeks after baseline, which was 1 month after the telephone survey and not yet at baseline. In view of multiple testing, an alpha of 0.01 was used for drawing conclusions about treatment effects. Non-significant covariates (α = 0.10 to prevent type II errors) were excluded from the model, except for hypertension status (because of pre-stratification on hypertension in the randomisation), educational level, age and sex (because of hypotheses or because these variables were used to select participants) . Group, time, group*time, and receiving a pedometer were never excluded, as these were the predictors of interest. Finally, the group effects on the baseline measurement of the outcome were excluded from the final model if no such differences were found (as expected, given randomized treatment assignment), because this increases power and corresponds with treating the baseline measurement as a covariate instead of as a repeated measure .

Efficacy of a pedometer on PA guideline adherence
The interaction between the intervention group and the pedometer was tested only when a significant pedometer effect was found, as well as significant differences between intervention groups with respect to the outcome at follow-up.

Missing values and data checking
Participants with a missing outcome for one or more time points were included in the analyses without imputation of missing values, using the direct likelihood approach . Missing values on covariates were replaced if allowed. Predictors and covariates were checked for multicollinearity by inspecting their variance inflation factor (VIF). No VIFs above 10 were found, indicating the absence of multicollinearity .
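A simplified analogue of this three-level model, fitted with lme4 in R rather than the MLWiN/PQL software used in the trial, is sketched below; the long-format data frame, covariate set and effect sizes are hypothetical, so this illustrates the model structure only.

```r
# Simplified analogue of the three-level mixed logistic regression (toy data;
# not the trial's MLWiN/PQL analysis or its full covariate set).
library(lme4)

set.seed(1)
n_gp <- 20; n_per_gp <- 30
long <- expand.grid(
  id   = 1:(n_gp * n_per_gp),
  time = factor(c("week25", "week47", "week73"))
)
long$gp        <- ((long$id - 1) %/% n_per_gp) + 1
long$group     <- factor(c("control", "TPC", "TMI", "combined"))[(long$id %% 4) + 1]
long$pedometer <- rbinom(nrow(long), 1, 0.5)
long$adherence <- rbinom(nrow(long), 1, plogis(-0.5 + 0.4 * (long$group != "control")))

# Random intercepts for general practice (gp) and participant (id)
fit <- glmer(adherence ~ group * time + pedometer + (1 | gp) + (1 | id),
             data = long, family = binomial)
summary(fit)

# Odds ratios with Wald 95% confidence intervals for the fixed effects
exp(cbind(OR = fixef(fit), confint(fit, parm = "beta_", method = "Wald")))
```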
Intervention effectiveness Separate recommended intake levels are given in the Netherlands for fruits and vegetables, as they have been found to differ in consumption circumstances and meals, as well as in their associations with health and disease . Hence, separate analyses were conducted. Effectiveness at the intermediate (week 25), short-term (week 47) and follow-up (week 73) measurements was analysed with mixed logistic regression using PQL estimation. These were intention-to-treat analyses, since all available measurements of all randomized participants were analysed without imputation for missing measurements. The mixed model had three levels: GPs, participants, and measurements (baseline and 25, 47 and 73 weeks). GP and participant effects were included as random intercepts. Additionally, the effects of time of measurement, group and time of measurement*group were allowed to vary randomly between GPs (time, group, and time*group) or participants (time), but no significant variance was found. Thus, the reported models had random intercepts only. Socio-demographic variables, lifestyle variables, cognitive behavioural determinants, and baseline measures of the primary outcomes were included as between-subject covariates (except for the baseline behaviour of the outcome at hand, which was included as a repeated measure to allow the inclusion of patients who dropped out after the baseline measurement) . To the extent that these covariates are related to the outcome behaviour at hand, including them improves the power and precision of treatment-effect testing and estimation due to reduced residual outcome variance. Having been sent a pedometer during the intervention period was included as a within-subject factor (0 = no; 1 = yes), since it was sent to participants 29 weeks after baseline, which was 1 month after the telephone survey and not yet at baseline. In view of multiple testing, an alpha of 0.01 was used for drawing conclusions about treatment effects. Non-significant covariates (α = 0.10 to prevent type II errors) were excluded from the model, except for hypertension status (because of pre-stratification on hypertension in the randomisation), educational level, age and sex (because of hypotheses or because these variables were used to select participants) . Group, time, group*time, and receiving a pedometer were never excluded, as these were the predictors of interest. Finally, the group effects on the baseline measurement of the outcome were excluded from the final model if no such differences were found (as expected, given randomized treatment assignment), because this increases power and corresponds with treating the baseline measurement as a covariate instead of as a repeated measure . Efficacy of a pedometer on PA guideline adherence The interaction between the intervention group and the pedometer was tested only if a significant pedometer effect was found, as well as significant differences between intervention groups with respect to the outcome at follow-up. Missing values and data checking Participants with a missing outcome for one or more time points were included in the analyses without the imputation of missing values, using the direct likelihood approach . Missing values on covariates were replaced where allowed. Predictors and covariates were checked for multicollinearity by inspecting their variance inflation factor (VIF). No VIFs above 10 were found, indicating the absence of multicollinearity .
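The trial's models were three-level mixed logistic regressions fitted with PQL in MLwiN, for which there is no drop-in equivalent in common Python libraries. The sketch below therefore ignores the GP- and participant-level random intercepts and only illustrates, on invented synthetic data, the fixed-effect structure of interest (group, time, group*time, plus covariates), the conversion of log-odds coefficients into odds ratios with 95% CIs, and the VIF screen for multicollinearity. All variable names and values are assumptions for illustration, not the trial data.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf
from statsmodels.stats.outliers_influence import variance_inflation_factor

rng = np.random.default_rng(1)
n = 600
df = pd.DataFrame({
    "adheres": rng.integers(0, 2, size=n),                        # 0/1 guideline adherence
    "group": rng.choice(["control", "TPC", "TMI", "combined"], size=n),
    "time": rng.choice(["wk25", "wk47", "wk73"], size=n),
    "age": rng.normal(57, 7, size=n),
    "sex": rng.choice(["male", "female"], size=n),
    "hypertension": rng.integers(0, 2, size=n),
})

# Fixed-effect structure of interest: group, time, group*time, plus covariates
# (random intercepts for GP and participant are deliberately omitted here).
fit = smf.logit(
    "adheres ~ C(group, Treatment('control')) * C(time)"
    " + age + C(sex) + C(hypertension)", data=df).fit(disp=False)

# Log-odds coefficients exponentiated into odds ratios with 95% CIs.
ci = fit.conf_int()
odds_ratios = pd.DataFrame({"OR": np.exp(fit.params),
                            "2.5%": np.exp(ci[0]),
                            "97.5%": np.exp(ci[1])})
print(odds_ratios.filter(like="group", axis=0).round(2))

# Multicollinearity screen on the covariates: a VIF above 10 would flag a problem.
X = sm.add_constant(pd.DataFrame({"age": df["age"],
                                  "female": (df["sex"] == "female").astype(int),
                                  "hypertension": df["hypertension"]}))
vifs = {col: variance_inflation_factor(X.values, i) for i, col in enumerate(X.columns)}
print({k: round(v, 2) for k, v in vifs.items()})
```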
Baseline features Table presents the baseline characteristics of the participants. Table and Figs. , , (available online) show the percentages of participants that adhered to a guideline per group and time of measurement. There were no significant differences between the groups at baseline on outcome variables or potential covariates (all p > 0.05). None of the participants met the PA guideline at baseline due to the inclusion criterion, whereas 44% and 31% of the participants adhered to the guideline for fruit and vegetable intake, respectively. The average age was 57.15 years (SD = 7.13), 55% were men, and 52% were classified as hypertensive; 54% had a low educational level, while 23% had an intermediate educational level. Selective dropout Of the 1629 participants, 1509 (93%) finished the intermediate survey, 1201 (74%) completed follow-up 1 and 1186 (73%) completed follow-up 2. In the TPC group, the additional survey (week 39) was completed by 356 participants (88%) (Fig. ). Dropout was found to be unrelated to age, sex, hypertension, or region. There were more dropouts among participants with a low educational level (i.e., less than secondary or vocational education) than among participants with a higher educational level (25% vs. 17%). It should be noted that possible bias due to group and education effects on dropout was adjusted for in the effect analyses by including all dropouts and all predictors of dropout in the analyses of each outcome. Efficacy of TPC, TMI and the combined version Table shows the mixed logistic regression analysis in which the outcome difference between each pair of groups was estimated at each time point and translated into an odds ratio with a confidence interval. The effects in Table suggest that differences between groups were fairly constant over time points, except for a larger effect of TPC in week 47 (PA and vegetables) and week 73 (fruit). This table, therefore, also reports the pairwise differences based on a model that assumed constancy of differences over time points. Concerning PA guideline adherence, pairwise comparisons revealed that, after baseline, more participants in the TPC, TMI and combined groups adhered to the PA guideline than participants in the control group. Although pairwise comparisons in Table indicated that differences between intervention groups were not significant, the following ranking (based on the size of the odds ratio) seemed to apply: combined ≥ TPC ≥ TMI > control (with ‘>’ representing a significant difference and ‘≥’ representing a borderline or no significant difference). For fruit consumption, pairwise comparisons showed that participants in the TPC group were more likely to adhere to the fruit consumption guideline than participants in the control group, and more participants in the TPC group met this guideline than participants in the combined group (Table ). Participants in the TMI group appeared more likely to meet this guideline than participants in the control group (borderline significance, Table ). The following ranking seemed to apply: TPC ≥ TMI ≥ combined ≥ control. Regarding vegetable consumption, pairwise comparisons indicated that more participants in the TPC group adhered to the vegetable consumption guideline than participants in the combined or control group (Table ), with the following ranking: TPC ≥ TMI = combined ≥ control.
When we examined whether the treatment effects depended on educational level and hypertension status (in view of the expected superiority of TMI over TPC for participants with a low educational level and without hypertension ), no significant treatment by time by education or treatment by time by hypertension interaction was found. Predictors of guideline adherence Baseline variables that significantly predicted guideline adherence in week 73 (follow-up 2) are reported in Table . Concerning PA, self-efficacy expectations, habit strength and stages of change positively predicted adherence, and participants who filled out the baseline questionnaire in the winter were more likely to adhere than participants who did so in the spring. For fruit consumption, age, self-efficacy expectations, habit strength, number of action plans, stages of change, PA and intake of vegetables were positive predictors of adherence. In addition, participants who filled out the baseline questionnaire in the winter, women and underestimators or realists were more likely to adhere to the fruit guideline than those who filled out the measurement in the spring, men and overestimators. With regard to vegetable intake, self-efficacy expectations, habit strength, number of action plans, stages of change and fruit intake positively predicted adherence, whereas modelling negatively predicted adherence. Women; intermediately and highly educated participants; those who were married or living together; participants who were born outside the Netherlands or who had a family history of CVD; underestimators; realists; and non-smokers were more likely to adhere than men; poorly educated participants; participants who were single, divorced or widowed; those who were born in the Netherlands or who had no family history of CVD; overestimators; and smokers. Pedometer effects There were no differences in adherence to the PA guideline ( OR = 0.98, 95% CI = 0.75; 1.28) between participants in the intervention groups who were or were not sent a pedometer.
This paper described the comparative effects of TPC, TMI and a combined version on adherence to the Dutch public health guidelines for PA and fruit and vegetable consumption, which were measured with self-report questionnaires. Although TMI was expected to be most successful, TPC, TMI and the combined version were found to be equally effective in increasing the proportion of participants reporting PA guideline adherence. TPC seemed most suited to promoting adherence to both the fruit and the vegetable consumption guidelines. Previous analyses of our interventions concerning absolute changes in these behaviours indicated that all interventions affected PA and the dietary behaviours equally well . Overall improvements were modest but comparable to or better than those in other studies on multiple risk behaviour interventions addressing guideline adherence , though most assessments so far were done at 12 months at the latest . This is not the first study on lifestyle change to describe the relatively greater effectiveness of print-mediated compared with telephone-mediated programmes . In any case, regarding combined fruit and vegetable consumption, a systematic review also revealed smaller effects of MI interventions compared to other programmes . What could explain the superior effect of TPC in changing nutritional behaviour? TPC mailed to the participants’ home addresses can be kept and re-read, which may be important for behavioural change . Of the study participants who received the letters, about 75% reported having kept them and 50% having read them more than once . For TMI participants, recalling what was discussed and decided might be more difficult after the telephone interventions and might have impeded behaviour change. Previous studies have also found that patients may not correctly recall much of the recommendations and information given by their counsellors . Furthermore, with TPC, the detail of the information is pre-set, whereas in TMI, the detail depends on the client’s conversation, causing more variability in the information provided. Perhaps the information provided in TMI was less comprehensive. Also, addressing two nutritional behaviours in one 20-minute session could have hindered a profound discussion, which may have impaired the effectiveness of TMI. Finally, the qualification of the motivational interviewers may also explain why TMI lagged behind TPC: in our study, interviewers had beginner proficiency, and higher competency and more experience are expected to result in better TMI outcomes . In conclusion, we found that TMI helped participants to reach the intended cut-off for PA adequacy, although, in its current form, it may be less suitable for addressing fruit or vegetable consumption.
As in our study, meta-analyses have reported that the effects of computer tailoring and motivational interviewing mostly manifest themselves in the short or medium term . Adherence rates in the study intervention groups seemed to have stabilized for PA and declined for fruit and vegetable intake at the 73-week follow-up compared to those at the 47-week follow-up. This finding also applied to our absolute behavioural improvement . The type of behaviour may offer an explanation: PA may provide people with more direct physical or psychological reinforcement (e.g., feelings of vigour or relaxation), which better stimulates maintenance, compared to the intake of fruit and vegetables . We observed a large increase in the number of participants who met a certain guideline from the baseline to the intermediate measurement (week 25). This could be due to the fact that this measurement was executed by telephone, which may be more subject to social desirability bias than a written questionnaire . A similar increase from baseline to the intermediate telephone survey was also found in the control group. Besides the social desirability aspect, this may also indicate that merely participating in a study that requires completion of self-reported assessments can already induce behaviour change, a finding that has been reported before . The second goal of this study was to investigate predictors of PA and fruit and vegetable consumption. In line with an umbrella systematic literature review, we found that none of our baseline sociodemographic variables predicted our PA outcome , although season (winter) might be a variable to account for. Also, others have mentioned the relevance of season . Habit, self-efficacy and stage of change have been found to be variables consistently related to PA . For both fruit and vegetable consumption, women were more likely to meet this lifestyle recommendation. For fruit consumption, age was also positively related to reaching the recommendation. A seasonal influence was found for fruit intake; however, others have reported no such influence . For vegetable consumption, a higher educational level, being native Dutch, having a partner and a history of CVD predicted higher intake. We observed no seasonal influence for vegetable intake, although others did reveal such a link . Guideline adherence for fruit and for vegetable consumption had identical social-cognitive predictors (awareness, self-efficacy, habit strength, number of action plans and stages of change), except for modelling. The limiting role of social modelling was restricted to vegetable intake only. Self-efficacy beliefs and habits have been considered in a review as variables that are consistently related to fruit and vegetable intake . Other variables were also revealed in individual studies, such as the predictive value of action plans concerning fruit intake or stages of change regarding fruit and vegetable consumption . Finally, we examined the efficacy of a pedometer on adherence to the PA guideline. Although using a pedometer may be associated with increased PA , in this study, this device did not affect adherence to the PA guideline. Also, we did not find it to affect absolute change in PA . Current evidence has not provided conclusive proof of its effectiveness either . Contamination may have led to a type II error. In the Netherlands, people may already possess a pedometer as a result of marketing or free gifts with food products.
Moreover, this lack of effect could be explained by the fact that participants were not asked to report step data, and therefore, they may have been less motivated to use the pedometer . Put differently, the positive effect of pedometers in research studies may be artificial when participants know that their steps will be evaluated. Limitations Counsellors of the TMI sessions were not blinded. However, the risk of bias was probably low, since our counsellors for the TMI sessions were trained, followed an interview protocol, sessions were recorded and rated by objective assessors, and the counsellors had neither inherent allegiance nor conflict of interest with the treatment provided in the study. Also, the blinding of researchers and participants was not feasible in our lifestyle study. Researchers were aware of group assignments, because they were responsible for the logistics of the project. This entailed the organization of the processing, printing and mailing of the tailored letters; the training and monitoring of both the TMI counsellors and TMI coders; the scheduling of telephone sessions; and the organization of the self-reported assessments. During the ongoing study, in-person contact between the researchers and study participants was rare; therefore, we estimated the influence of researchers on the performance of participants as negligible. Our participants were unaware of the trial’s hypothesis, and we were able to conceal their group allocation until after the baseline assessment. Furthermore, our self-reported written assessments, which were completed independently by participants at home, made interference by the researchers unlikely. Besides, data entry was done by an external organization. Nevertheless, empirical studies have shown that if true blinding is lacking, effect estimates for subjective outcomes may be exaggerated . Dropout was higher among participants who received a tailored letter (TPC and combined group) than among those who did not (TMI and control group), as well as higher among participants with a low versus an intermediate or high educational level. Although the mixed logistic regression analyses could be biased in the case of non-ignorable dropout (i.e., dropout depending on unmeasured outcome variables, known as MNAR missingness), the analyses were intention-to-treat , including all available data from dropouts. Treatment group and educational level were always included as predictors in the outcome analyses, and dropout did not depend on other covariates or measured outcome variables. Thus, at least under the assumption of missingness at random (MAR), the present analyses were unbiased. As in most lifestyle intervention studies , we used self-report measures to assess PA and fruit and vegetable intake. These measures are similar to the ones used to estimate lifestyle prevalence in communities for national databases. Naturally, such measures have limitations (e.g., they require the participants to have good memories and estimation skills) . In addition, measuring PA in relatively older adults requires extra attention to the time frame used (i.e., they experience more memory difficulties), frequency (i.e., they are active on a more irregular basis) and type of activities that are performed by our study group (i.e., moderate intensity activities are more common in this age group) . Furthermore, overestimation of the measured behaviours is likely . But, for evaluation purposes, the responsiveness to change of an instrument is most relevant.
It is known that our food frequency instrument has adequate responsiveness . The modified CHAMPS is a valid and reliable instrument specifically for older adults and has been shown to be sensitive to change as well . We had to add one (SQUASH) item to calculate adherence to the PA guideline. This item came from a validated questionnaire , but because it concerned a summary question, it may have lowered the psychometric quality and may have been more prone to measurement error. This is because such questions may estimate behaviour less precisely than multiple-item questionnaires . Measuring multiple behaviours and their determinants with self-report questionnaires requires considerable time investments from participants, which may have led to annoyance and thus dropouts or invalid results . However, the measurement responses were adequate (93%, 74% and 73% in the intermediate and first and second follow-up measurements, respectively), although some participants only partially completed questionnaires, necessitating a call to complete the data collection. Data analysis was conducted by one researcher (HvK) who was aware of group assignment. Capacity limitations did not allow us to appoint two independent data analysts (one being blind to group allocation). To avoid bias caused by flawed analysis and interpretation of the data, the trial was analysed in accordance with a pre-specified (unchanged) protocol , and detailed documentation was kept for each step of the analysis. These steps were checked and discussed regularly with two members of the research team (IM and GvB). Furthermore, the scientific committee of the grant organization and the co-authors were involved in challenging the outcomes for alternative interpretations. Our trial was funded by a national funding organization (ZonMw), the design was published before the publication of the results, the trial protocol was registered online, and the study was monitored by a medical ethics committee. All these sources of information allowed for confirmation that all primary outcomes were reported in our study publication. Recommendations Following the recommendations on PA and fruit and vegetable intake has been shown to reduce the risk of CVD complications. Because the present study indicates that effects on guideline adherence may differ from absolute change, we recommend that future studies examine intervention effects both on absolute improvement and on guideline adherence to choose an intervention with the most impact. Research comparing the effects of TPC and TMI over a longer measurement period (more than 12 months post-intervention) is needed to assess whether additional efforts to promote the maintenance of behaviour change are required . Based on the findings of this paper, TPC is preferred over TMI or a combined version as the method to promote guideline adherence for fruit and vegetable consumption, whereas all three interventions are recommended to stimulate adherence to the PA guideline. Still, more research is necessary to confirm the advantage of TPC over the other treatment modalities for adherence to the guidelines for fruit and vegetable consumption. In addition, participants with lower self-efficacy expectations, who are less motivated to change and have lower habit strength, will need more attention in future interventions to increase their adherence to guidelines for PA and fruit and vegetable consumption. This also applies to overestimators and men with regard to adherence to the fruit and vegetable consumption guidelines.
Furthermore, future interventions targeting adherence to these latter guidelines should stimulate participants to formulate action plans.
Effect of Addition of Metal Oxide Nanoparticles on the Strength of Heat-Cured Denture Base Resins: Protocol for Systematic Review and Meta-Analysis of In Vitro Studies
e18b1f01-1bbf-4182-bd1a-897ca47e66e3
11464938
Dentistry[mh]
Acrylic resins used in dentistry are composed of compounds that are natural or artificial in origin and have several repeating structural units or monomers that form the macromolecules or polymers . Although there have been recent technical advances in denture base materials (DBMs), such as computer-aided design and computer-aided manufacturing, heat-cured polymethyl methacrylate (PMMA) remains one of the most widely used DBMs . Despite its prevalent use, its mechanical properties (eg, strength) remain relatively poor. Heat-cured dentures are known to fracture under repeated occlusal and functional loads . For optimum clinical usage and longevity, denture base resins should withstand masticatory forces and, thus, must have good mechanical properties (eg, strength). Overall, dentures are subjected to a combination of tensile, compressive, and shear forces and are susceptible to accidental drops, which may result in denture base fracture . Furthermore, DBMs should have high flexural strength to withstand masticatory forces without deformation or fracture . To improve the mechanical properties of PMMA, reinforcement with nanoparticles has been shown to have promising results. Nanoparticles affect the mechanical properties of materials due to their potential to create new and strong bonds and make them more reactive when compared with macro- or microparticles . With the recent advances in nanotechnology, various metal oxide nanoparticles, such as copper oxide (CuO), titanium dioxide (TiO2), zirconium dioxide (ZrO2), zinc oxide (ZnO), silicon dioxide (SiO2), and others, have been developed to enhance the properties and clinical performance of denture base resins . Previous research has shown that the properties of a nanocomposite can be influenced by factors such as the type, size, shape, and concentration of nanoparticles, and the interaction of the nanoparticles with the polymer matrix . Although a few systematic reviews have attempted to analyze the effect of the addition of nanoparticles on the mechanical properties , they were limited to a single type of metal oxide nanoparticle. Furthermore, while the effect of metal oxide nanoparticles on certain aspects of strength, such as flexural and impact strength, has been reviewed, collective evidence on the different types of clinically relevant strength remains unclear. Therefore, the primary objective of this review is to systematically synthesize the evidence to compare and analyze the effects of the addition of metal oxide nanoparticles on the strength (flexural, impact, transverse, compressive, and tensile strength, and fracture toughness) of heat-cured PMMA denture resins. The secondary objective is to determine the optimum size and concentration of nanoparticles to provide improved strength of PMMA. In addition, this review also aims to analyze the influence of methodological factors, adherence to testing and sample-making guidelines, and surface treatments of nanoparticles on the strength of heat-cured denture base resins. The findings of this review will serve as recommendations for the optimum use of metal oxide nanoparticles to improve the mechanical properties of heat-cured PMMA. Ethical Considerations This study is a review of existing in vitro studies, not involving human subjects, and thus institutional ethical clearance was not necessary. Protocol Registration To have robust methodology and transparency in reporting, the protocol of the review has been registered in the Open Science Framework (OSF) .
Open Science Framework is an open and free platform that supports research by supporting protocol registration and collaboration. This study will be conducted per the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) guidelines for systematic reviews and meta-analyses . Review Question This systematic review and meta-analysis attempts to answer the following research questions: (1) What is the effect of different metal oxide nanoparticles on the strength (impact, compressive, flexural, tensile, transverse strength, and fracture toughness) of heat-cured denture base resins? (2) What is the optimum size and concentration of addition of metal oxide nanoparticles to achieve improved impact, compressive, flexural, tensile, transverse strengths, and fracture toughness of heat-cured denture base resins? and (3) Do methodological factors, such as the treatment of nanoparticles and following testing and sample fabrication guidelines, influence the strength of denture base resins? Eligibility Criteria The eligibility criteria of studies for inclusion are defined using the PICOS (Population, Intervention, Comparison, Outcomes, and Study design) criteria . In , population (P) denotes PMMA heat-cured denture base resins. Studies using veined, autoclaved, or microwave-cured denture base resins will be excluded. Intervention (I) denotes the addition of any metal oxide nanoparticle in PMMA only by weight. Studies based on nanotubes, fillers, fibers, coating, or hybrid nanoparticles in PMMA will be excluded. Studies that do not mention weight or volume, units of measurement, and method of mixing or dispersion of the nanoparticles will be excluded. Control (C) denotes heat-cured PMMA without any addition of nanoparticles. Outcome measure (O) denotes the strength of heat-cured PMMA. Strength was interpreted as all important parameters of strength that can affect the denture base resins, which include flexural, impact, transverse, compressive, tensile strength, and fracture toughness. Studies in which the dimensions of the tested samples or the units of measurement were not mentioned, or in which testing was done on dentures, will be excluded. Study design (S) denotes in vitro studies; studies published in English from January 2012 to 2023 will be considered. Thus, in vivo studies, case reports, systematic and narrative reviews, letters to the editor, short commentaries, pilot studies, or studies with preliminary results will be excluded. Information Sources and Search Methods for Identification of Studies The search strategy was formulated for each database by an experienced librarian (CP) and was sent for peer review to another qualified librarian using PRESS (Peer Review of Electronic Search Strategies) guidelines . Electronic searches in 5 databases will be performed: PubMed (NCBI), Scopus (Elsevier), Cochrane Library (Wiley), CINAHL Plus with Full Text (EBSCO), and Dimensions Free Web App. Reporting of the search methods shall be done using the PRISMA-S (Preferred Reporting Items for Systematic Reviews and Meta-Analyses search extension), wherein searches will be done in databases, study registries, gray literature, and other online sources. Other online sources that will also be searched include Google Scholar, ResearchGate, and one source for gray literature (OpenGrey). Citation chasing and manual citation searching will be done by 2 researchers (AP and AT). The details of the search strategies used can be found in .
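To make the PICOS rules above concrete, the following minimal screening helper shows how the stated inclusion and exclusion criteria could be applied to a candidate record during screening. It is a sketch only: the record fields, example values, and return messages are hypothetical and are not part of the registered protocol.

```python
def is_eligible(study):
    """Return (eligible, reason) for a candidate record, following the stated PICOS criteria."""
    # S: in vitro studies published in English, January 2012 to 2023
    if study["design"] != "in vitro" or not (2012 <= study["year"] <= 2023) or study["language"] != "English":
        return False, "study design, year, or language out of scope"
    # P: heat-cured PMMA only (veined, autoclaved, or microwave-cured resins excluded)
    if study["resin"] != "heat-cured PMMA":
        return False, "population is not heat-cured PMMA"
    # I: pure metal oxide nanoparticles added by weight (nanotubes, fibers, coatings, hybrids excluded)
    if study["additive"] != "metal oxide nanoparticle" or not study["reports_weight_concentration"]:
        return False, "intervention is not a metal oxide nanoparticle added by weight"
    # C: an unmodified heat-cured PMMA control group must be present
    if not study["has_unmodified_control"]:
        return False, "no unmodified PMMA control"
    # O: strength outcomes with units and specimen dimensions reported, not tested on dentures
    if study["tested_on_dentures"] or not (study["reports_units"] and study["reports_dimensions"]):
        return False, "outcome reporting incomplete or testing done on dentures"
    return True, "meets all PICOS criteria"


candidate = {
    "design": "in vitro", "year": 2019, "language": "English",
    "resin": "heat-cured PMMA", "additive": "metal oxide nanoparticle",
    "reports_weight_concentration": True, "has_unmodified_control": True,
    "tested_on_dentures": False, "reports_units": True, "reports_dimensions": True,
}
print(is_eligible(candidate))   # (True, 'meets all PICOS criteria')
```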
Screening and Selection of Studies After using the formulated search strategy in all databases and other online sources, papers or records will be imported to Rayyan (Rayyan Systems Inc) for screening. Deduplication will be performed using the same software. Two researchers (PK and AH) will independently screen the titles and abstracts of the exported papers sequentially. Full-text papers of potentially eligible studies will be retrieved to determine their final inclusion and data extraction . If papers are to be eliminated, the reasons for elimination will be documented. Disagreements will be resolved through discussion; for papers where conflicts cannot be resolved, 2 reviewers (AP and AT) will be consulted for the final decision . Process of Data Extraction Two researchers (AP and AT) will analyze each of the included papers and extract the relevant data elements. To ensure precise, error-free, and complete extraction of data elements, extraction will be done independently. The extracted data will be tallied, and disparities will be resolved by discussions with the other 2 reviewers (PK and AH). Corresponding authors of papers will be contacted for more information or missing information in the published record if required. Elements of Data Extraction Elements of data are summarized in . Data not stated shall be reported as “NS,” and unclear data shall be reported as “unclear.” A pilot test with the 2 researchers shall be done to ensure uniformity in understanding and procedure. Data elements to be extracted for the data summarization and analysis:
- Study details: study title, study author, year of publishing
- Details of the added metal oxide nanoparticle: type of nanoparticle used, size of nanoparticle used, concentration of nanoparticles used, effect of metal oxide nanoparticle morphology on the strength
- Methods of sample fabrication: treatment of nanoparticles, method of dispersion of nanoparticles, details of the acrylic used, method of acrylization, method of sample finishing, storage of samples before testing
- Testing and sample details: testing or sample fabrication guidelines followed, sample size used, sample dimensions, details of testing mechanism
- Results and conclusion: mean and SD of the strength tested, conclusions
Methodological and Risk-of-Bias Assessment The assessment of the quality of the included papers will be performed using the modified CONSORT (Consolidated Standards of Reporting Trials) guidelines . The risk of bias will be assessed individually by 2 researchers (PK and AH), and any disagreements will be resolved by the third reviewer (AP) based on the modified Cochrane Risk of Bias tool; scoring will be done as described in a previous study . Data Extraction The extracted data will be presented in tabular form. The table will report the extracted variables as listed above in the Elements of Data Extraction section. Following data extraction, narrative as well as quantitative analyses will be performed. Data Analysis and Summarization The results of all finally included studies will be described succinctly in the form of a summary table. A random-effects model will be used to calculate pooled estimates, as this is a more robust estimate regardless of heterogeneity . Cochran Q and I² statistics will be used as indicators of heterogeneity. The pooled estimates of the primary end points will be calculated as the weighted mean differences with 95% CIs using the Comprehensive Meta-analysis Package (CMA version 3.0).
Sensitivity analysis will be conducted to identify studies that may have severely affected the pooled estimates. Exploratory subgroup analyses by different moderator variables will also be conducted to examine sources of heterogeneity. A funnel plot and the Egger linear regression test will be used to assess publication bias . The significance level will be set at a 2-sided P<.05. Forest plots will be used to present the data.
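The planned analyses (random-effects pooling, Cochran Q, I², funnel plot, and the Egger test) will be run in Comprehensive Meta-Analysis; as a transparent illustration of what those quantities are, the sketch below computes a DerSimonian-Laird random-effects pooled mean difference, Q, I², and an Egger intercept test on invented study-level data. All numbers are hypothetical, and the snippet is not the protocol's analysis software.

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical per-study summaries: mean flexural strength (MPa), SD, and n for
# nanoparticle-reinforced vs unmodified heat-cured PMMA specimens (values illustrative).
m1  = np.array([95.0, 88.0, 102.0, 90.0, 97.0])
sd1 = np.array([ 8.0,  9.0,  10.0,  7.0,  9.0])
n1  = np.array([  10,   12,    10,   15,   12])
m0  = np.array([82.0, 85.0,  90.0, 84.0, 88.0])
sd0 = np.array([ 7.0,  8.0,   9.0,  6.0,  8.0])
n0  = np.array([  10,   12,    10,   15,   12])

md  = m1 - m0                              # raw mean difference per study
var = sd1**2 / n1 + sd0**2 / n0            # variance of each mean difference
w   = 1.0 / var                            # fixed-effect (inverse-variance) weights

# Cochran's Q and I^2 from the fixed-effect fit.
md_fixed = np.sum(w * md) / np.sum(w)
Q = np.sum(w * (md - md_fixed) ** 2)
dfree = len(md) - 1
I2 = max(0.0, (Q - dfree) / Q) * 100

# DerSimonian-Laird between-study variance and random-effects pooling.
tau2 = max(0.0, (Q - dfree) / (np.sum(w) - np.sum(w**2) / np.sum(w)))
w_re = 1.0 / (var + tau2)
pooled = np.sum(w_re * md) / np.sum(w_re)
se = np.sqrt(1.0 / np.sum(w_re))
print(f"Q = {Q:.2f} (df = {dfree}), I^2 = {I2:.1f}%, tau^2 = {tau2:.2f}")
print(f"Pooled MD = {pooled:.2f} MPa, 95% CI ({pooled - 1.96*se:.2f}, {pooled + 1.96*se:.2f})")

# Egger's test: regress the standardized effect on precision; an intercept far
# from zero suggests funnel-plot asymmetry (possible small-study effects).
se_i = np.sqrt(var)
egger = sm.OLS(md / se_i, sm.add_constant(1.0 / se_i)).fit()
print(f"Egger intercept p-value: {egger.pvalues[0]:.3f}")
```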
After running the search in the 5 databases, a total of 1837 papers were found. Of these, 1752 were found to be duplicates, leaving 85 potential records that will be screened for titles and abstracts, and analyzed based on the inclusion and exclusion criteria. A similar search was conducted on other online sources, resulting in the identification of 129 papers that will be further analyzed based on inclusion and exclusion criteria. Citation chasing will be done on the finally selected papers. These numbers may vary slightly once we update our search. The study was initiated in November 2023, when the research questions were clearly defined and search strategies were formulated. The proposed study is expected to be completed by December 2024. The results of this systematic review and meta-analysis will be disseminated to the academic community through avenues such as scientific conferences and publication in a peer-reviewed journal. A full timeline of the systematic review process is shown below in . Principal Findings The proposed systematic review will provide collective evidence that evaluates the effects of the addition of metal oxide nanoparticles on the different types of strength of heat-cured PMMA. It further attempts to analyze different methodological factors that can affect the strength of metal oxide nanoparticle–reinforced, heat-cured PMMA. Previous research has shown that even though metal oxide nanoparticles can improve the mechanical properties of PMMA, they do so at certain concentrations and particle sizes . Therefore, this systematic review further aims to analyze the effect of different sizes of metal oxide nanoparticles that can affect the strength. Similarly, the concentration of the nanoadditive plays a significant role in the mechanical properties, as, at higher percentages, agglomerates form that adversely affect the properties of the nanocomposite . Other methodological factors, such as surface treatment of the nanoparticles, have also been shown to affect the properties of the nanocomposite. The use of a silane coupling agent improves the bonds between the matrix and the filler . Surface treatments have also been shown to lower the surface energy of the nanoparticles, thereby preventing agglomeration or cluster formation . Along similar lines, following standard guidelines for testing or sample fabrication can play a crucial role in determining the robustness of the methodology followed in a study, as well as affect the results obtained.
Over the years, an optimized methodology has been followed by standard organizations across countries . With the advances in nanotechnology, there are several types of metal oxide nanoparticles being used and tested for their effects. To the best of the authors’ knowledge, no systematic review has attempted to evaluate and compare the effect of these different metal oxide nanoparticles on the different strengths (flexural, impact, transverse, compressive, tensile strength, and fracture toughness) of heat-cured PMMA. We anticipate determining the optimum conditions for metal oxide nanoparticle addition to heat-cured PMMA to achieve the most favorable mechanical properties with respect to strength. The results of the review can be used to fabricate heat-cured dentures reinforced with metal oxide nanoparticles with improved mechanical properties, subsequently improving the longevity of the dentures by reducing fractures and deformation. The results of this review would also generate relevant considerations for future research to improve the quality of the DBMs used and, in turn, the quality of life of patients with dentures. Strengths and Limitations The strength of the review is in the rigorous methodology that will be followed. First, to enhance the likelihood of locating relevant papers on this topic, a medical librarian was involved in developing a thorough search strategy. Second, robust inclusion and exclusion criteria will be followed, which will also ensure scientific rigor in answering the specific research question. Finally, all variables that can affect the strength of nanoparticle-reinforced PMMA will be analyzed either qualitatively or quantitatively. This systematic review will have some limitations that should be taken into consideration. The studies chosen for this systematic review and meta-analysis may exhibit a degree of heterogeneity due to variations in the acrylic used and the strength testing techniques. This review is limited to heat-cured PMMA resins only and does not consider other types of resins such as light-cured, autopolymerizing resin; 3D printed resins; and computer-aided design and computer-aided manufacturing denture materials. This review is restricted to pure metal oxide nanoparticles and does not consider hybrid nanoparticles. This review will not include studies published in languages other than English. Although an exhaustive literature search was performed, there may be a possibility of missing literature pertinent to the research question, due to the vast number of articles published on this topic. Conclusions This systematic review will analyze the impact of the incorporation of metal oxide nanoparticles on the different types of strength of heat-cured denture base resins. This review shall determine the optimal size and concentration of metal oxide nanoparticles to enhance the strength of heat-cured denture base resins. This study may also provide insights into the optimal methodology for adding metal oxide nanoparticles, such as following guidelines and surface treatments of the nanoparticles, to enhance the strength of heat-cured denture base resins.
In vitro comparison of
d1734128-b336-4b41-8fef-76ff9921bf39
11658299
Dentistry[mh]
The quality of root canal cleaning and shaping, and elimination of microorganisms from the root canal system are important parameters in success of root canal therapy. Chemomechanical preparation of the root canal system supplemented by the application of intracanal medicaments decreases the intracanal microbial load; however, it cannot eradicate all the microorganisms, and some resistant species may remain viable in the root canal system. Additional microorganisms may also leak into the root canal and colonize the root canal space . Thus, application of sealers with antimicrobial properties has been recommended to prevent subsequent proliferation of residual microorganisms in the root canal system and reinfection . Enterococcus faecalis ( E. faecalis ), a Gram-positive facultative anaerobe, is the most common microorganism responsible for endodontic re-infections, and has been isolated from 24 to 77% of the infected root canals that require endodontic retreatment due to apical periodontitis . Endodontic re-infection with E. faecalis often occurs due to saliva contamination of the root canal system during or after the primary endodontic treatment, using contaminated endodontic instruments, or less commonly through a carious lesion . E. faecalis is not clinically important in primary endodontic infections because its proliferation is inhibited by other bacteria. However, in case of endodontic treatment failure, decreased microbial load of the root canal system enhances the proliferation of E. faecalis and subsequent colonization of the root canal space . Therefore, although a hermetic seal is the primary criterion for a successful root canal therapy , optimal antimicrobial activity of sealers is also imperative to prevent or limit the proliferation of residual microbial species in the root canal system . E. faecalis is a resistant microorganism that can survive adverse environmental conditions and limited nutrient supply. It penetrates deep into dentinal tubules by up to 500 μm depth. Therefore, it cannot be easily eradicated by the regular chemomechanical preparation of the root canal system , highlighting the need for sealers with optimal antibacterial activity against this particular microorganism . Due to its high resistance, E. faecalis is commonly used as the gold standard for evaluation of antimicrobial activity of sealers . AH Plus (AHP) is a commonly used epoxy resin-based endodontic sealer known for its optimal antimicrobial properties. It has favorable mechanical properties, adequate radiopacity, strong adhesion to dentin, minimal solubility, optimal dimensional stability, favorable flowability, and high resistance, which contribute to its widespread use as an endodontic sealer . However, it has been claimed that the antimicrobial activity of AHP may decrease over time, and it may lose most of its antimicrobial activity after the first 24 h . Endoseal MTA (ESM) is an injectable hydraulic paste used for permanent root canal obturation. It is a premixed and ready-to-use sealer that sets entirely by itself in presence of moisture. It is mainly composed of calcium silicate . It is insoluble in water and tissue fluids, radiopaque, and aluminum-free. Its chemical composition is similar to that of mineral trioxide aggregate (MTA). Thus, it is expected to show optimal physical and biological properties . ESM sealer has also shown optimal antimicrobial properties against both Gram-positive and Gram-negative bacteria . 
Cold ceramic (CC) sealer is a MTA-like root end filling material which has gained recent popularity due to its optimal sealing ability and biocompatibility. It is mainly composed of calcium oxide, silicon oxide, barium oxide, and sulfur trioxide . Previous studies showed superior sealing ability of CC compared to amalgam and glass ionomer cement . Also, a recent study indicated its superior sealing ability compared with MTA in blood-contaminated conditions and comparable performance in other environments. Due to its favorable sealing ability, the CC sealer can effectively seal the root canal system and resist microleakage, contributing to its potential antimicrobial activity by creating a barrier against bacterial penetration . Considering the significance of antimicrobial activity of endodontic sealers, and lack of comprehensive information about the antimicrobial activity of the CC sealer, this study aimed to compare the survival of E. faecalis in dentinal tubules following root canal therapy with AHP, ESM, and CC sealers. The null hypothesis of the study was that no significant difference would be found in antibacterial activity of AHP, ESM, and CC sealers against E. faecalis. This in vitro, experimental study was conducted on 80 single-rooted and single-canal human maxillary central and lateral incisors and canine teeth obtained from the Oral Surgery Department of Yazd Shahid Sadoughi University of Medical Sciences. The teeth had been extracted for purposes not related to this study, such as periodontal disease and irreparable coronal caries. All teeth were anonymized before being used in this study. The study protocol was approved by the ethics committee of Yazd Shahid Sadoughi University of Medical Sciences (IR.SSU.DENTISTRY.REC.1401.073). Sample size The sample size was calculated to be 20 in each group assuming alpha = 0.05, beta = 0.2, study power of 80%, and mean of log10 colony count to be 0.53 in the test group and 1.31 in the control group with a standard deviation of 1 using the formula for the comparison of two means (mean colony count). Eligibility criteria The inclusion criteria were extracted human maxillary anterior teeth with one single root and one straight canal, mature apex, minimal coronal caries, no previous endodontic treatment, no evident accessory canal, no dystrophic calcification, and no cracks, fracture or root resorption. The teeth were radiographed from the sagittal view to ensure meeting the eligibility criteria. Tooth preparation Soft tissue residues were removed by immersion of the teeth in 5% sodium hypochlorite (NaOCl) for 24 h. The teeth were then decoronated at the cementoenamel junction by using a diamond disc (NTI Serrated Diamond Disc, Kerr) operating at 3000 rpm under water coolant. A #10 K-file (Mani, Tochigi, Japan) was used to determine the working length. The file was introduced into the canal, and the working length was determined by subtraction of 1 mm from the file length when its tip was visible at the apex. All root canals were mechanically instrumented up to F3 of ProTaper system. The root canals were irrigated with 10 cc of 2.5% NaOCl for chemical preparation during the instrumentation process. Smear layer was removed by rinsing the root canals with 5 cc of 5.25% NaOCl for 2 min, followed by 10 cc of 17% EDTA for 4 min and 5 cc of 5.25% NaOCl for 2 min, and a final rinse with saline. The root surfaces were then coated with two layers of nail varnish except for the apical 2 mm. This was done to seal the external root surface. 
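As a rough, hedged illustration of the sample-size calculation described in the Methods above (alpha = 0.05, 80% power, mean log10 colony counts of 0.53 vs 1.31, SD = 1), the sketch below applies a generic two-sample comparison-of-means formula. The choice of one- or two-sided z value and the rounding convention change the result, so this is not a reconstruction of the authors' exact calculation, and small differences from the reported n = 20 per group are expected.

```python
# Generic per-group sample size for comparing two means (equal n, equal SD).
# n = 2 * (z_alpha + z_beta)^2 * sd^2 / delta^2
import math
from scipy.stats import norm

def n_per_group(mean1, mean2, sd, alpha=0.05, power=0.80, two_sided=True):
    z_alpha = norm.ppf(1 - alpha / 2) if two_sided else norm.ppf(1 - alpha)
    z_beta = norm.ppf(power)
    delta = abs(mean1 - mean2)
    return math.ceil(2 * (z_alpha + z_beta) ** 2 * sd ** 2 / delta ** 2)

print(n_per_group(0.53, 1.31, sd=1.0))                   # ~26 with a two-sided z
print(n_per_group(0.53, 1.31, sd=1.0, two_sided=False))  # ~21 with a one-sided z
```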
The roots were then autoclave-sterilized at 121 °C and 15 psi pressure for 30 min. Experiment Standard-strain E. faecalis (ATCC 29212) was cultured on blood agar supplemented with 5% sheep blood overnight. The purity of the bacterial culture was ensured by Gram staining, the catalase test, esculin hydrolysis in bile esculin agar medium, and growth in the presence of 6.5% NaCl. A bacterial suspension was prepared by inoculating several pure colonies of E. faecalis into 50 mL of sterile TSB medium (Merck, Darmstadt, Germany) in an Erlenmeyer flask, and the turbidity of the suspension was compared with a standard 0.5 McFarland tube until a turbidity equivalent to 0.5 McFarland (1.5 × 10⁸ CFUs/mL) was obtained. The roots were immersed in the bacterial suspension and incubated at 35 °C for 3 weeks. During this period, 20 mL of TSB was removed from the flask and replaced with 20 mL of sterile TSB medium. The roots were then randomly assigned to four groups ( n = 20) by simple randomization, including three experimental groups and one control group. The root canals in the experimental groups were obturated with 2% gutta-percha (Meta Biomed, Korea) and AHP (Dentsply DE Trey, Konstanz, Germany), ESM (Maruchi; Wonju, Korea), and CC (Yazd, Iran) sealers by the cold lateral compaction technique . The sealers were prepared and applied as instructed by the manufacturers. The roots in the control group were randomly divided into two subgroups ( n = 10) of positive control and negative control, to ensure the accuracy of the experiment. The roots in the positive control subgroup were inoculated with the bacteria but were not obturated. The roots in the negative control subgroup were evaluated without bacterial inoculation and obturation. The roots in each group were separately placed in tryptic soy broth medium, and after 30 days of incubation at 35 °C, the exposed apical part of the roots was coated with sticky wax, and the external root surfaces were completely cleaned with 95% ethanol. Under sterile conditions, a 4-mm section was made at the mid-root by a diamond disc operating at 3000 rpm . The root filling material was removed by a plugger and a #2 Gates-Glidden drill. The superficial layer of intracanal dentin was shaved by using a #5 Gates-Glidden drill and weighed on a scale (Mettler Toledo) with 100 µg accuracy for the purpose of standardization of the samples . The shaved dentin powder was transferred into a microtube containing 1 mL of tryptic soy broth and vortexed. Next, 100 µL of the mixture was diluted 1:10 and cultured on a blood agar plate (Merck, Darmstadt, Germany) supplemented with 5% sheep blood and incubated at 35 °C for 24 h . After incubation, the number of grown colonies was visually counted (Fig. ), and the colony count was recorded in colony forming units per milliliter (CFUs/mL) by multiplying the value by 100. To ensure bacterial purity, the aforementioned tests were repeated. Statistical analysis Data were analyzed using SPSS version 22 (SPSS Inc., IL, USA). Due to the non-normal distribution of the data, as shown by the Kolmogorov-Smirnov test, comparisons were made by the non-parametric Kruskal-Wallis test at the 0.05 level of significance.
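The sketch below illustrates the two quantitative steps just described: converting a plate count to CFUs/mL (100 µL plated from a 1:10 dilution gives the ×100 factor) and comparing the three sealer groups with the Kruskal-Wallis test. The colony counts are invented for illustration and are not the study data.

```python
# Hypothetical colony counts per plate for the three sealer groups.
from scipy.stats import kruskal

def to_cfu_per_ml(colonies, plated_volume_ml=0.1, dilution_factor=10):
    # CFUs/mL = colonies / volume plated * dilution factor, i.e. colonies * 100 here
    return colonies / plated_volume_ml * dilution_factor

ahp = [to_cfu_per_ml(c) for c in [3, 0, 5, 2, 1]]   # AH Plus (hypothetical)
esm = [to_cfu_per_ml(c) for c in [6, 4, 9, 3, 7]]   # Endoseal MTA (hypothetical)
cc  = [to_cfu_per_ml(c) for c in [4, 2, 6, 5, 3]]   # Cold ceramic (hypothetical)

h_stat, p_value = kruskal(ahp, esm, cc)
print(f"Kruskal-Wallis H = {h_stat:.2f}, P = {p_value:.3f}")  # compare against the 0.05 threshold
```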
Table presents the measures of central tendency and dispersion for the colony count in the study groups. As indicated, the positive control subgroup showed the highest colony count, and the negative control subgroup showed the lowest colony count, confirming the accuracy of the sterilization process and the adopted methodology. Comparison of the colony count among the three experimental groups revealed no significant difference at 30 days ( P = 0.102). This study compared the survival of E. faecalis in dentinal tubules following root canal therapy with AHP, ESM, and CC sealers. The null hypothesis of the study was that no significant difference would be found in the antibacterial activity of AHP, ESM, and CC sealers against E. faecalis . The methodology of this study was adopted from previous studies . Also, a mid-root section was selected because Harrison et al. showed the highest E. faecalis penetration depth in the middle third of the root canal (106–222 μm) after a 3-week incubation period. The present results revealed no significant difference in colony count among the three sealer groups at 30 days, indicating their comparable antibacterial activity against E. faecalis . Thus, the null hypothesis of the study was accepted. The optimal antimicrobial activity of AHP has been previously confirmed in the literature , and is believed to be due to the release of formaldehyde and bis-phenol A during its polymerization process . AHP is the improved version of AH26, and has a significantly lower rate of formaldehyde release (which is genotoxic and cytotoxic) compared with AH26 . Moazemi et al. reported higher antimicrobial activity of AH26 compared with AHP due to higher formaldehyde release from the former. As mentioned earlier, the antimicrobial activity of the AHP sealer may also be attributed to the release of bisphenol A glycidyl since this compound can also cause gene mutations in bacterial and mammalian cells and yeasts . Due to the relatively recent introduction of ESM and CC sealers to the market , studies on their antimicrobial activity are limited in number. ESM is a premixed sealer containing calcium silicate, calcium aluminate, calcium aluminoferrite, calcium sulfate, an opacifier, and a thickening agent, and has a setting time of 12 min. The antimicrobial activity of this sealer is due to its alkaline pH (approximately 9), the release of calcium hydroxide, and its high content of oxide elements.
Zirconium oxide, sodium oxide, magnesium oxide, aluminum oxide, sulfur dioxide, and ferric oxide in the composition of this sealer cause bacterial membrane damage, and increase the membrane permeability to molecules such as calcium hydroxide, allowing their influx into the bacterial cytoplasm . The CC sealer is a powder-gel sealer and 93% of its ingredients include calcium oxide, silicon oxide, barium oxide, and sulfur trioxide that form larnite, barite, and calcium silicate following reaction with water. It has a final setting time of 24 h, and increases the pH to 10.11, and 10.84 after 2 h and 7 days, respectively, creating a stable alkaline environment . Gholamhoseini et al. reported that ESM had no antimicrobial activity against E. faecalis . Their results were different from the present findings, which may be due to the fact that they assessed the antimicrobial activity using the agar diffusion test, which is different from the method used in the present study, and has limitations such as chemical interactions between the culture medium and antimicrobial agents, which affect the accuracy of the results. This test was recently replaced with the direct contact test, which can better reveal the potential antimicrobial activity of endodontic sealers . Mak et al. showed that AHP and freshly mixed ESM were successful in elimination of the planktonic form of all intracanal microorganisms; this effect even lasted for 7 days after setting for ESM. Although many studies that adopted the direct contact test reported high antibacterial activity of sealers against E. faecalis , the results obtained by this method in vitro cannot be generalized to the clinical setting since it totally overlooks the dental structure and invasion of bacteria deep into dentinal tubules . The dentinal block model used in the present study enables the assessment of bacterial viability by up to 300 μm depth in dentinal tubules . Saleh et al. adopted this method in their study and showed that AHP caused complete eradication of E. faecalis at 300 μm tubular depth after 7 days. They also demonstrated that although calcium hydroxide decreased the bacterial colony count, it could not effectively eradicate the bacteria. They did not assess the antimicrobial activity of bioceramic sealers in their study. The present study adopted the same methodology and obtained somehow similar results. Bioceramic sealers create an alkaline environment and can show antimicrobial activity comparable to that of calcium hydroxide . Perestigaard et al. used the same methodology and demonstrated that all tested materials had antimicrobial activity for one day; however, their antimicrobial activity decreased with time. They noted that AHP caused the highest reduction in colony count on day 1 while the reduction in colony count was comparably high in AHP, MTA, and calcium hydroxide groups on day 7. The present results revealed that both resin-based and bioceramic sealers were effective against E. faecalis at 30 days; however, the mean colony count was slightly lower in the AHP group. One shortcoming of the dentin block methodology adopted in the present study is small sample size . Thus, the difference in colony count between the two sealers could become significant if a larger sample size had been tested, which should be addressed in future studies. Lower colony count in the AHP group may be attributed to deeper penetration of resin sealers into dentinal tubules. Literature is controversial regarding the penetration depth of sealers. 
However, Lin et al. , in a systematic review and meta-analysis concluded that the penetration depth of epoxy resin sealers is generally higher than bioceramic sealers. Schmidt et al. discussed that deep sealer penetration into dentinal tubules would be hard to achieve, irrespective of technique, and reported higher penetration depth of AHP compared with bioceramic sealers, irrespective of the sealer particle size. Although the buffering capacity of dentin in the main canal would be insignificant due to large volume of bioceramic material, this effect would be more prominent in dentinal tubules due to small volume of sealer . Yoo et al. inspected the dentinal tubules infected with E. faecalis under a scanning electron microscope and revealed that E. faecalis was viable adjacent to the growing MTA crystals within the dentinal tubules at 4 weeks. At 16 weeks, however, the bacterial cell wall was in the process of degradation which was simultaneous with the completion of tubular obstruction. Therefore, higher colony count in the two bioceramic sealer groups in the present study may be attributed to their decreased alkaline activity in dentinal tubules next to the proton pump of E. faecalis cell wall. It should be noted that the reduction in colony count was numerically greater in the CC group than the ESM group in the present study (although statistically insignificant). To explain this difference, it should be kept in mind that the CC sealer is supplied in the form of powder and gel while ESM is premixed. Premixed sealers have a lower tricalcium silicate content since they have a non-aqueous carrier; therefore, they would cause a smaller pH rise and would have a subsequently lower antimicrobial activity . E. faecalis can lower its mechanical activity and alter its morphology and cellular physiology to remain viable but nonculturable to survive under stressful environmental conditions such as nutrient deprivation, low temperature, and presence of antimicrobial agents and antibiotics. Upon return to normal conditions, E. faecalis can resuscitate through methods such as quorum sensing . The cellular culture-based methods are still the gold standard for assessment of bacterial viability ; nonetheless, they are incapable of detecting viable but nonculturable bacteria since a culture medium alone cannot often cause their resuscitation , and the intensity and duration of environmental stress also affect the resuscitation process . In other words, cell culture in a culture medium clearly indicates the presence of viable bacteria but absence of grown colonies does not necessarily indicate absence of viable E. faecalis . Therefore, lower colony count in the AHP group cannot necessarily indicate its higher antibacterial activity against E. faecalis. To overcome this shortcoming, it has been suggested to assess the continuity of bacterial cell membrane as an acceptable and independent parameter of growth and proliferation , which is performed by using BacLight™ bacterial viability staining and observation under a confocal laser scanning microscopy (CLSM). This method is based on two fluorescent markers that enter all cells or bacteria with damaged cell wall . Wang et al. used CLSM and showed equal antimicrobial activity of AHP and BC sealers against E. faecalis in dentinal tubules after 30 days. Their results were in agreement with the present findings despite adopting a different methodology. E. 
faecalis can change its morphology to develop resistance against factors that inhibit cell wall synthesis or compromise its integrity, such as penicillin, cephalosporins, and carbapenems, and lead to the development of chronic, refractory infections. This phase was named the L-form by Klieneberger et al. . However, considering the efficacy of bioceramic materials in cell wall degradation, the level of sensitivity of CLSM for the assessment of E. faecalis cell death remains questionable, and needs to be elucidated in future studies. Finally, it should be borne in mind that since none of the currently available antimicrobial sealers can eliminate all microorganisms from the root canal, a hermetic coronal and apical seal still remains an inseparable part of vital and non-vital pulp treatments. The in vitro design, relatively small sample size, and use of microbial culture for the assessment of bacterial viability were among the limitations of this study. Future studies with a larger sample size are required and should adopt more advanced techniques for the detection of viable bacteria, such as quantitative reverse transcription polymerase chain reaction. Within the limitations of this in vitro study, the results indicated comparable antibacterial activity of AHP, ESM, and CC sealers for the elimination of E. faecalis from dentinal tubules after 30 days.
The diagnostic value of metagenomic next-generation sequencing for identifying
9d302af4-dfa4-48a6-bb6e-e2732da66a92
6549306
Pediatrics[mh]
Metagenomic next-generation sequencing (mNGS) had high sensitivity and specificity for S. pneumoniae identification in CSF specimens. mNGS was less affected than culture by the use of antibiotics before CSF collection. The number of unique reads of S. pneumoniae detected by mNGS was related to the duration from onset to the sample collection time. Bacterial meningitis (BM) is an infection of the meninges and subarachnoid space that can sometimes present as an infection of the brain cortex and parenchyma . BM had an annual incidence of 16 million cases worldwide in 2013, of which 1.6 million had sequelae . BM causes significant morbidity and mortality in both developed and developing countries [ – ]. The fatality rate of Streptococcus pneumoniae ( S. pneumoniae ) meningitis can reach 20–37% in high-income countries and up to 51% in low-income countries . Neurological sequelae are common among survivors [ – ]. Therefore, prompt pathogen diagnosis and accurate antibiotic treatment are essential to improve the prognosis of BM. With the application of the meningococcal vaccine, the S. pneumoniae conjugate vaccine and the Hib vaccine, the incidence of BM caused by these pathogens has decreased in developed countries . However, the pneumococcal conjugate vaccine is not universally used in China. A multicentre study of BM in Chinese children showed that S. pneumoniae was still the most common pathogen, occurring in up to 46.5% of cases . The conventional pathogen detection method (culture) may take a long time (3 to 5 days) and have a low positive rate. Metagenomic next-generation sequencing (mNGS) is a high-throughput sequencing method that can directly detect the nucleic acids of pathogens in clinical specimens and then analyse the nucleic acid sequences by bioinformatics methods. As a novel diagnostic tool, mNGS has been used for the identification of various pathogens such as bacteria, viruses, fungi and parasites from clinical samples (tissues, CSF or plasma) in an unbiased, simultaneous and direct manner [ – ]. However, at present, there are few studies on pathogen detection by mNGS in bacterial meningitis. Our team previously identified pathogens from cerebrospinal fluid specimens in children with bacterial meningitis by mNGS. The results showed that mNGS is of great value for the identification of pathogens in cerebrospinal fluid (CSF), and S. pneumoniae is still the most common pathogen . However, there is currently no research on the diagnostic value of mNGS for a single pathogen. Hence, this study aimed to investigate the diagnostic value of mNGS for identifying S. pneumoniae in children with BM. Study population and specimen collection The study included BM patients aged 29 days to 18 years who were admitted to the Department of Infectious Diseases at Beijing Children’s Hospital from October 23, 2014, to December 31, 2016, and from December 1, 2017, to July 31, 2018. The diagnostic criteria for BM were consistent with those of the World Health Organization (WHO), as follows: (1) acute fever (> 38.5 °C rectal or > 38.0 °C axillary); (2) headache, meningeal irritation, or altered consciousness; (3) at least one of the following in the cerebrospinal fluid: leukocytes > 100 cells/mm³, or leukocytes 10–100 cells/mm³ with elevated protein (> 100 mg/dL) or decreased glucose (< 40 mg/dL); and (4) positive culture, positive Gram stain, or positive bacterial antigen in the CSF. A case meeting diagnostic criteria 1, 2, and 3 at the same time was considered a probable case. A probable case meeting criterion 4 was considered a confirmed case.
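The case definition above reads as a simple decision rule, sketched schematically in Python below. The argument names are illustrative rather than taken from the study's data dictionary, and this is not the investigators' actual screening code.

```python
# Schematic encoding of the WHO-based case definition (criteria 1-4) quoted above.
def classify_bm_case(acute_fever, clinical_signs, csf_wbc_per_mm3,
                     csf_protein_mg_dl, csf_glucose_mg_dl, csf_micro_positive):
    """Return 'confirmed', 'probable', or 'not meeting criteria'."""
    csf_criterion = (csf_wbc_per_mm3 > 100) or (
        10 <= csf_wbc_per_mm3 <= 100
        and (csf_protein_mg_dl > 100 or csf_glucose_mg_dl < 40)
    )
    probable = acute_fever and clinical_signs and csf_criterion   # criteria 1 + 2 + 3
    if probable and csf_micro_positive:                           # criterion 4: culture/Gram stain/antigen
        return "confirmed"
    return "probable" if probable else "not meeting criteria"

print(classify_bm_case(True, True, csf_wbc_per_mm3=250,
                       csf_protein_mg_dl=150, csf_glucose_mg_dl=30,
                       csf_micro_positive=True))  # -> confirmed
```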
Exclusion criteria were as follows: cerebrospinal fluid < 1 mL and bloody CSF. The clinical data of all included cases, including demographic characteristics, antibiotic use, specimen collection and clinical microbiology test results [culture and Alere BinaxNow® Streptococcus pneumoniae Antigen test (Alere, USA)], were recorded. One millilitre of CSF was collected for mNGS when the lumbar puncture was performed after admission. This study was approved by the Ethics Committee of Beijing Children’s Hospital affiliated to Capital Medical University (No. 2017–74). Written informed consent was obtained from the patient’s parents or other legal representatives. DNA extraction DNA was extracted directly from 300 µl of each CSF sample (each patient and a negative “no-template” control) using the TIANamp Micro DNA Kit (DP316, Tiangen Biotech, Beijing, China). Proteinase K (10 µl) and 300 µl of buffer GB (with carrier RNA) were added to the sample, which was then incubated at 56 °C for 10 min. Next, 300 µl of cold absolute ethyl alcohol was added, and the tube was incubated at room temperature for 5 min. The liquid was transferred to a new adsorption column and washed with buffer GD and buffer PW. The DNA was dissolved in 40 µl of Tris–ethylenediaminetetraacetic acid buffer. Library generation and sequencing The extracted DNA was sonicated with a Bioruptor Pico device to generate 200–300 bp fragments. According to the standard protocol of the BGISEQ-500 sequencing platform (BGI-Tianjin, Tianjin, China), DNA libraries were constructed from the extracted DNA through end repair, overnight adapter ligation and polymerase chain reaction amplification. Quality control was carried out using a bioanalyser (Agilent 2100, Agilent Technologies, Santa Clara, CA, USA) combined with quantitative PCR to measure the adapters before sequencing. DNA sequencing was then performed using the BGISEQ-500 platform (BGI-Tianjin, Tianjin, China) . Data processing and analysis High-quality sequencing data were generated after removal of short (< 35 bp), low-quality and low-complexity reads. The reads were then mapped to the human reference genome (hg19 and YH sequences) using the Burrows-Wheeler Aligner . The remaining data were aligned with the NCBI microbial genome database ( ftp://ftp.ncbi.nlm.nih.gov/genomes/ ), which included the genome sequences of 3446 bacterial species (104 species of Mycobacterium tuberculosis and 45 species of mycoplasma/chlamydia), 1515 viral species, 206 fungal species and 140 parasites connected to human diseases. The mapped data were used for further analysis. The depth and coverage of each species were calculated using Soap Coverage on the SOAP website ( http://soap.genomics.org.cn/ ). Quantitative real-time PCR (qPCR) validation We performed S. pneumoniae qPCR to validate the mNGS results. The DNA was extracted using the QIAamp cador Pathogen Mini Kit (Qiagen, 54106). qPCR was carried out on the ViiA 7 real-time PCR system using cycling conditions comprising 2 min at 50 °C and 10 min at 95 °C followed by 45 two-step cycles of 15 min at 95 °C and 1 min at 60 °C. The sequences of the primers and probes are as follows: F: ACGCAATCTAGCAGATGAAGCA, R: TCGTGCGTTTTAATTCCAGCT and P: FAM-AACGCTTGATACAGGGAG-MGB . Statistical methods Continuous variables were expressed as the mean ± standard deviation or as the median. Two groups were compared using the independent t-test for parametric data and the Mann-Whitney U test for non-parametric data.
P values < 0.05 were considered statistically significant, and all tests were 2-tailed. Based on the extracted data, a 2 × 2 contingency table was used to determine sensitivity, specificity, PPV, and NPV. All of the statistical analyses were conducted using SPSS 23.0 software (SPSS Inc., USA).
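To illustrate the read-filtering step in the data-processing description above (removing short, low-quality, and low-complexity reads before human-genome subtraction), below is a minimal pure-Python sketch. Only the 35-bp length cutoff comes from the text; the quality and complexity thresholds are placeholders, and the real pipeline used the BGISEQ-500 workflow and the Burrows-Wheeler Aligner rather than code like this.

```python
# Toy FASTQ quality filter: drop reads that are short, low mean quality, or low complexity.
def passes_qc(seq, qual, min_len=35, min_mean_q=20, max_single_base_frac=0.8):
    if len(seq) < min_len:                                # stated cutoff: reads < 35 bp are removed
        return False
    mean_q = sum(ord(c) - 33 for c in qual) / len(qual)   # Phred+33 encoding assumed
    if mean_q < min_mean_q:                               # placeholder quality threshold
        return False
    top_base = max(seq.count(b) for b in "ACGT")
    if top_base / len(seq) > max_single_base_frac:        # crude low-complexity check
        return False
    return True

def filter_fastq(lines):
    """Yield (header, seq, plus, qual) records that pass QC from an iterable of FASTQ lines."""
    it = iter(lines)
    for header, seq, plus, qual in zip(it, it, it, it):   # group lines in fours
        if passes_qc(seq.strip(), qual.strip()):
            yield header, seq, plus, qual
```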
Patient demographics and microbiology results A total of 135 children with BM were included in this study. S. pneumoniae was identified in 43/135 (31.9%) patients by at least one microbiology test. A total of 26/43 (60.5%) cases were male, and the median age was 11.5 (8.5, 48.9) months. A total of 37/135 (27.4%) cases were identified as S. pneumoniae infection by clinical microbiology tests (culture and/or Alere BinaxNow® Streptococcus pneumoniae Antigen test). S. pneumoniae was identified by blood and/or CSF culture in 26/135 (19.3%) cases. A total of 35/135 (25.9%) patients had positive results on the Alere BinaxNow® Streptococcus pneumoniae Antigen test. S. pneumoniae was identified in 32/135 (23.7%) patient CSF specimens by mNGS. Six CSF samples were identified as S. pneumoniae positive only by mNGS. mNGS information and the related influencing factors Among the 32 patients who were diagnosed with S. pneumoniae meningitis by mNGS, the number of unique reads of S. pneumoniae ranged from 4 to 341,303. The coverage of S. pneumoniae ranged from 0.011 to 86.68%, with a depth value of 1–36.57 (Table ). The collection time of CSF specimens for mNGS was 4 to 121 days after disease onset, and the median collection time was 14 days. The difference in the number of unique reads of S. pneumoniae between the different groups (CSF collection time < 14 days and > 14 days from onset) was statistically significant (170.5 vs. 13, P = 0.019) (Fig. ). All 32 mNGS-positive children were treated with antibiotics prior to mNGS sample collection. Based on the number of unique reads of S. pneumoniae , the 32 patients were divided into two groups. The unique reads ≤100 group comprised 17 (41.4%) patients, while the unique reads > 100 group comprised 15 patients. Comparing the two groups, patients in the unique reads > 100 group had a generally shorter duration from disease onset to sample collection than patients in the mNGS unique reads ≤100 group (11 days vs. 20 days, P = 0.027).
There was a significant difference in the simultaneous CSF white blood cell count and protein level between the two groups (10 × 10⁹/L vs. 32 × 10⁹/L, P = 0.001; 1280 mg/L vs. 855 mg/L, P = 0.016) (Table ). Comparison of mNGS and clinical microbiology methods The median collection time of CSF specimens for culture that had positive S. pneumoniae results was 4 (3, 5) days. The median collection time of CSF specimens for mNGS that had positive S. pneumoniae results was 14 (8, 23) days from onset. The difference in the collection time of CSF for culture and mNGS was statistically significant (4 days vs. 14 days, P < 0.001). Among the 135 BM patients, mNGS and clinical microbiology tests (culture and/or Alere BinaxNow® Streptococcus pneumoniae Antigen test) were both positive for S. pneumoniae in 26 (19.3%) samples and were both negative for S. pneumoniae in 92 (68.1%) samples. The accordance rate of mNGS and clinical microbiology tests (culture and/or Alere BinaxNow® Streptococcus pneumoniae Antigen test) for the identification of S. pneumoniae was 87.4% (118/135). The comparison between mNGS and clinical microbiology tests (culture and Alere BinaxNow® Streptococcus pneumoniae Antigen test) is shown in Table . Taking culture as the gold standard, the sensitivity and specificity of S. pneumoniae identification by mNGS were 73.1% and 88.1%, respectively. The PPV and NPV of diagnosing S. pneumoniae meningitis by mNGS were 59.4% and 93.2%, respectively. Comparing mNGS and the combined tests (culture and Alere BinaxNow® Streptococcus pneumoniae Antigen test), the sensitivity and specificity of mNGS for S. pneumoniae identification were 70.3% and 93.9%, respectively, and the PPV and NPV for the identification of S. pneumoniae by mNGS were 81.4% and 89.3%, respectively. In total, we performed qPCR validation for 15 CSF specimens, while insufficient CSF sample was available for qPCR validation in the other cases. These 15 CSF specimens were collected for mNGS and qPCR at the same time. The qPCR results were positive in 9 (9/15, 60%) cases [mNGS unique reads: 429 (25–341,303)] and negative in 6 (6/15, 40%) cases. The latter 6 cases had fewer unique reads of S. pneumoniae , and the numbers of unique reads were 4, 4, 6, 7, 20 and 27. The clinical microbiology tests (culture and/or Alere BinaxNow® Streptococcus pneumoniae Antigen test) were positive for S. pneumoniae in 4 (4/6) of these cases (unique reads: 4, 6, 7 and 20). One (1/6) case (unique reads: 4) was negative for S. pneumoniae in the clinical microbiology tests (blood and/or CSF culture and Alere BinaxNow® Streptococcus pneumoniae Antigen test). Another case (1/6) (P28, unique reads: 27) was positive for Staphylococcus aureus ( S. aureus ) in the blood culture. However, the result of specific PCR from CSF specimens was negative for both S. pneumoniae and S. aureus in P28. In addition, we identified multiple pathogens in the CSF sample of P14 by mNGS, including Escherichia coli ( E. coli ) (unique reads 30) and S. pneumoniae (unique reads 4). In terms of clinical microbiology, P14 had a positive result for S. pneumoniae in the CSF culture and a positive Alere BinaxNow® Streptococcus pneumoniae Antigen test in the CSF. However, the Sanger validation of the CSF sample was negative for both S. pneumoniae and E. coli.
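The combined-test comparison above fully determines the 2 × 2 table (26 both-positive, 92 both-negative, 32 mNGS-positive and 37 combined-test-positive among 135 patients), so the reported indices can be recomputed as a quick consistency check; the sketch below does this in Python. Small rounding differences from the published percentages are possible.

```python
# Sensitivity, specificity, PPV and NPV from the 2 x 2 counts reported above
# (mNGS versus the combined culture/antigen reference).
def diagnostic_metrics(tp, fp, fn, tn):
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "PPV": tp / (tp + fp),
        "NPV": tn / (tn + fn),
    }

tp, tn = 26, 92          # both positive, both negative (reported)
fp = 32 - tp             # mNGS-positive, combined tests negative -> 6
fn = 37 - tp             # combined tests positive, mNGS negative -> 11
for name, value in diagnostic_metrics(tp, fp, fn, tn).items():
    print(f"{name}: {value:.1%}")   # about 70.3%, 93.9%, 81.3%, 89.3%
```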
In this study, clinical microbiology tests (culture and the Alere BinaxNow® Streptococcus pneumoniae Antigen test) were compared with an emerging pathogen detection method (mNGS). When the combined culture and pneumococcal antigen test was regarded as the gold standard, mNGS showed high sensitivity and specificity for the diagnosis of S. pneumoniae meningitis. Previous studies have reported that mNGS has different sensitivities and specificities for the identification of different types of pathogens (bacterial, viral or fungal). The sensitivity and specificity of mNGS for the identification of bacteria had a wide range, with sensitivity ranging from 50.7 to 100% and specificity ranging from 76.5 to 87.5% . In this study, some of the CSF specimens were collected at the convalescence stage. If all CSF specimens had been collected during the acute phase, the sensitivity of mNGS would likely have been higher. The number of unique reads of S. pneumoniae from CSF collected ≤14 days from disease onset was significantly higher than that from CSF collected > 14 days from disease onset. Patients in the unique reads > 100 group had a generally shorter duration from disease onset to sample collection than those in the unique reads ≤100 group. These results together suggest that the number of unique reads of S. pneumoniae detected by mNGS was associated with specimen collection time, which is consistent with a previous study showing that, with the use of effective antibiotics and disease improvement, the number of unique reads of pathogens decreased . In this study, we observed that the time from disease onset to CSF collection was longer for mNGS than for culture. All patients were given antibiotics prior to mNGS and culture, and all patients were treated with antibiotics after the onset of disease. This phenomenon suggested that after antibiotic use, the timespan for positive pathogen identification by mNGS was longer than that for positive pathogen identification by culture.
A previous study suggested that the DNA of the pathogen may survive in body fluids for a prolonged period of time, and mNGS was less affected than culture by the use of antibiotics before CSF collection [ , , ]. Among the 6 CSF samples that were negative for S. pneumonia by qPCR, S. pneumonia was identified in 4 (4/6) samples by both mNGS and clinical microbiology tests (blood and/or CSF culture and Alere BinaxNow® Streptococcus pneumoniae Antigen test) . This result showed that mNGS appears to have a higher sensitivity than qPCR in the test cases. S. pneumonia (unique reads 27) was identified in P28 CSF sample by mNGS, while S. aureus was detected by blood culture. The P28 CSF sample was found to be negative for both S. pneumoniae and S. aureus by qPCR. In many mNGS systems, short read sequences lead to difficulties in assembling and matching to the reference sequence, especially in repeat regions, resulting in difficulties defining the final pathogen. Therefore, when the number of unique reads is very small, the interpretation of pathogen data needs to be very careful and cautious . We compared groups with ≤100 unique reads and groups with > 100 unique reads and analysed factors that could affect the number of unique reads of S. pneumoniae . There were significant differences in the simultaneous CSF white blood cell counts and protein levels between the two groups. These results suggested that the inflammation status during simultaneous mNGS detection might determine the number of unique reads of the pathogen. The number of unique reads may indirectly monitor disease progression. Currently, the criteria for diagnosing single pathogens by mNGS are unclear. Different studies have reported different diagnostic criteria, mainly based on the coverage rate, the relative abundance of pathogens or unique reads of pathogens . In this study, the minimum number of unique reads of S. pneumoniae was 4, and the clinical microbiology test was also positive for S. pneumoniae . Therefore, the positive criteria for single pathogen identification by mNGS testing require further research by a larger sample size. There were several limitations in this study. First, the collection time of some CSF samples for mNGS and clinical microbiology tests was different. The collection time of CSF was later for mNGS than for the clinical microbiology tests, which may affect the positive identification rate of mNGS. Some of the CSF specimens were collected at the convalescence stage. Second, this study was retrospective and had a limited number of BM cases, and thus large-scale research is needed. Third, mNGS still has low sequencing depth, and the pathogen database is imperfect . However, sequencing technology is developing rapidly, and it is believed that the depth of sequencing will improve within a short period of time. mNGS can further improve the sensitivity and specificity of pathogen identification . This study is the first to analyse the diagnostic value of mNGS for the identification of a single pathogen ( S. pneumoniae ). mNGS has high sensitivity and specificity for S. pneumoniae identification. Our study also demonstrated the relationship between the pathogen load (unique reads) and CSF collection time. Although there are no uniform criteria for pathogen identification by mNGS, there are many difficulties in interpreting the mNGS results. With the development of mNGS technology, mNGS could be a promising alternative diagnostic tool for pathogen detection.
Effects of group communication norms on daily steps in a team-based financial incentive mobile phone intervention in Shanghai, China
d4d00b77-9c00-4216-b5a5-f3fd4dac7ec8
11748593
Health Promotion[mh]
Regular physical activity reduces the risk of type 2 diabetes, cardiovascular disease, certain cancers, and all-cause mortality [ – ]. Globally, every year, physical inactivity causes more than 5 million deaths and INT$53.8 billion of health care expenditure . In China, 63.5% of the working population engages in work with light occupation activity intensity, and only 6.4% of the working population participates in moderate-to-vigorous intensity leisure-time physical activity for 150 min or more per week . Therefore, workplace interventions, currently recommended as one of the ‘eight best investments that work for physical activity’ , may be a suitable approach to promote physical activity in China . Worldwide, walking is the most popular type of physical activity because it is accessible, requires no special skills or equipment, and is practically free . Furthermore, the ubiquity of smartphones with built-in accelerometers has provided opportunities for physical activity monitoring and the evaluation of interventions . Of the intervention components that could be combined with smartphone tracking, financial incentive interventions have been gaining traction . A recent systematic review and meta-analysis suggested that while financial incentives for physical activity have been proven to be effective, there are still areas for improvement . Firstly, it is challenging to reach the least active with interventions. Studies have shown that workplace interventions generally have a participation rate of less than 50% and that older employees, those with a healthier BMI, and those who are less stressed and more physically active are more willing to participate in workplace physical activity interventions. Those employees who would need the intervention the most are the least likely to participate . One obstacle in getting inactive people active is the delayed health consequences. From a behavioural economics perspective, as the health benefits of physical activity are far in the future, providing immediate positive reinforcement may help people increase their level of physical activity . A meta-analysis of randomized controlled trials (RCTs) found that financial incentive interventions were effective in increasing goal achievement of physical activity during the intervention period . To date, most internet-delivered physical activity interventions lacked interactions and social support, resulting in poor user engagement and retention . Mobile health (mHealth) interventions have been defined as healthcare services or health promotion practices supported by mobile technology and devices, including mobile phone text messages, mobile phone calls, wearable or portable monitoring devices, mobile health applications, and telemedicine . Babcock et al. highlighted the importance of leveraging pre-existing social connections in mHealth interventions . Communication and messaging approaches may foster these connections in physical activity promotion. Effective communication not only reinforces participants' satisfaction with physical activity interventions but also helps to maintain engagement and accountability within a group . In particular, messaging that provides feedback, social support, and reminders can enhance the effect of interventions by keeping participants motivated and informed . Creating social norms, which refer to perceived expectations from others towards a given behaviour , such as physical activity, may help improve the effect of financial incentive interventions. 
Specifically, injunctive norms refer to individuals’ perceptions of others’ approval of a given behaviour, while descriptive norms focus on individuals’ perceptions of the prevalence of others’ behaviour . Social norms among users of running apps can lead to "herd behaviour," where individuals, through features like following and supporting others, are motivated to align their actions with their social networks . Previous studies mainly focused on descriptive norm-based physical activity interventions among adolescents or office workers, however, the effect of social norm interventions has not been well established [ – ]. Furthermore, little is known about how objectively measured team-based social norms influence individuals’ physical activity in the context of financial incentives. The current study is a team-based financial incentives step count intervention. The objectives of our study were to (1) examine the effect of an intervention to increase daily steps that used a combination of team and individual financial incentives; (2) explore how team-based social norms affect team members’ daily steps by analyzing team chat records. Design and recruitment We conducted a 13-week non-randomized controlled trial step count intervention between September and November 2019. Firstly, based on purposive sampling, 32 work sites in Shanghai were recruited, from which 3,035 participants were invited. As this was a cluster-based trial, after informing participating teams about the financial incentive rules and specific requirements for the intervention, each worksite voluntarily opted for either the intervention or control group, with individuals from the same department allocated to the same team. Additionally, we conducted a clustering analysis, with the results presented in the supplementary file. A total of 92 teams (n = 2,087) for the intervention group were recruited from 20 worksites. The control group included 12 worksites (n = 948). Valid data were provided by 2,049 participants in the intervention group and 936 in the control group. The effective response rates were 98.2% and 98.7% respectively (Fig. ). The inclusion criteria for teams and participants were: (1) at least 80% of the employees was set for each workplace, driven by strong leadership support and high employee willingness to participate, while ensuring representativeness; (2) employees had to be 18 years and above; (3) participants were required to have a smartphone with the function of tracking step counts. Exclusion criteria for participants were: (1) presence of heart disease, cerebrovascular disease, mental illness, or physical disorders; (2) pregnant women, as traditional Chinese cultural practices discourage physical activity among pregnant women, and there are widespread safety concerns regarding exercising during pregnancy, we excluded pregnant women from this study ; (3) employees who will leave the current workplace within the next few months. Unlike the intervention group, the control group consisted of individual participants rather than teams of participants. All participants provided written informed consent before taking part in the study. Intervention Participants in the intervention group were instructed to form a team of 20–25 members and to elect a team captain. The choice of team size was primarily influenced by the typical number of employees within individual work departments, allowing for effective management and communication. 
Additionally, a larger team size helps foster greater inclusivity and engage diverse employees, including those who are less active. The intervention was developed based on the Theory of Reasoned Action and the Theory of Planned Behaviour, which emphasize the role of subjective and injunctive norms [ , , , ]. We also applied the Behaviour Change Wheel (BCW) to guide intervention development. The BCW is a comprehensive and coherent framework that integrates behavioural theory to understand the specific behaviour change mechanisms within an intervention . The BCW offers an effective approach for designing interventions that are tailored to the specific context and population, and it has been used to guide behaviour change intervention in the workplace . The second layer of the BCW outlines a set of nine intervention functions (Education, Persuasion, Incentivisation, Coercion, Training, Enablement, Modelling, Environmental Restructuring, and Restrictions), which describe different ways an intervention can influence behaviour. The intervention was designed with the concept that participants can be supported and nudged by team members. All participants were asked to complete a questionnaire, including sociodemographic characteristics. The online intervention was conducted by the CXA Group (/ www.cxagroup.com/about/overview ), an automated information technology platform that integrates wireless devices, enrollment processes, messaging, self-administered surveys, and automatic transfers of financial incentives. Daily step counts were measured by WeRun, a social fitness plugin for WeChat, the most widely used social media platform in China. The daily step counts of participants were obtained via a cloud-based secure server of CXA, which can synchronise their daily participation records with step counts. Before the start of the intervention, the team leaders were informed about the rules for scoring and awards. No other formal instruction or training was provided to the team leaders. Each leader and their team members worked together towards achieving the preset step count goals. The collaborative effort was primarily self-motivated, driven by the shared goal of accumulating points and improving team performance. Participants in the intervention group were given a goal of achieving at least 10,000 steps per day based on a health promotion program led by the Chinese Center for Disease Control and Prevention and existing literature indicating that this target is associated with a reduced risk of all-cause mortality and incident cancer and cardiovascular diseases . Participants in the control group were not provided with a specific goal as they continued with their usual activity levels. Additionally, 1–2 weekly tweets on physical activity knowledge and skills were posted on the WeChat official account. The platform delivered daily prompts to participants individually, to collect their step count as measured by a phone-based accelerometer and team members received points for confirming daily attendance and additional points for achieving the daily step count goal. The personal total score was the sum of daily points. The team score was the average of the individual total scores of all team members. The team score was ranked weekly, with the overall ranking determined after 13 weeks of intervention. 
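For illustration, the scoring and ranking logic described above can be sketched as follows. This is a hypothetical sketch only: the article does not report the exact point values, so the ATTENDANCE_POINT and GOAL_POINT constants, the data structures and the function names are assumptions rather than the platform's actual implementation.

```python
from collections import defaultdict

ATTENDANCE_POINT = 1   # assumed points for confirming daily attendance
GOAL_POINT = 1         # assumed extra points for reaching the daily step goal
STEP_GOAL = 10_000     # daily step target given to the intervention group

def daily_points(steps, attended):
    """Points earned by one participant on one day."""
    points = ATTENDANCE_POINT if attended else 0
    if attended and steps >= STEP_GOAL:
        points += GOAL_POINT
    return points

def team_scores(daily_records, membership):
    """daily_records: {participant_id: [(steps, attended), ...]}
    membership:    {participant_id: team_id}
    Returns {team_id: team_score}, where the team score is the mean of
    its members' personal total scores, as described in the article."""
    personal_total = {
        pid: sum(daily_points(s, a) for s, a in days)
        for pid, days in daily_records.items()
    }
    totals, counts = defaultdict(float), defaultdict(int)
    for pid, total in personal_total.items():
        team = membership[pid]
        totals[team] += total
        counts[team] += 1
    return {team: totals[team] / counts[team] for team in totals}

def weekly_ranking(scores):
    """Rank teams from highest to lowest score; the top 50% of teams
    would qualify for the weekly team award."""
    return sorted(scores, key=scores.get, reverse=True)
```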
Financial incentives included 3 components: (1) weekly team award: according to the weekly score of each team, and the top 50% of all teams received a financial reward of 200 RMB (100 RMB, equivalent to US$14.3 in 2019) every week. (2) Final team award: the top 50% of the teams based on overall ranking after 13 weeks received a financial reward of 3,000 RMB (US$429). (3) Individual incentives: Based on the total individual scores, the top 50 individual members received a financial reward of 200 RMB (US$28.6) (Table ). The participants in the control group were only required to complete the basic sociodemographic questionnaire. To record the step counts in the control group, the platform also delivered daily prompts. Once the daily prompt was completed, participants in the control group received 0.1RMB (US$0.014) per day. There was no other intervention applied to the control group. Table illustrates how specific intervention measures were aligned with the key components of the TPB and BCW. It provides a clear overview of how each intervention element corresponds to the theoretical framework, ensuring that the intervention design is rooted in established behavioural theories. Measures Demographic variables Demographic characteristics of the participants were collected by an online self-administered questionnaire, including date of birth, gender, marital status, and education. Team-based social norms During the 13-week intervention period, each team in the intervention group established a "WeChat" group for communicating and administration. Out of 92 teams, 41, consisting of 906 participants, provided the completed team chat records (3,268 valid records) for which a content analysis was conducted. Firstly, to develop an effective coding method and standard of classification, the researchers randomly selected the chat records of 8 of the 41 teams. Then, two researchers read all chat records independently to identify themes and categorized them into four communication types. The final coding systems consisted of 4 types of communication:(1) nudging, (2) sharing, (3) feedback and (4) other. The 'other' category included messages that may not fit neatly into the first three types such as casual conversations that were unrelated to physical activity, as shown in Table . The coding of all the valid records was performed independently by the same two researchers in Microsoft Excel. Each record was coded by only one classification. Then the frequency of nudging, sharing and feedback were calculated for each team every week respectively. Step counts Based on a pilot study conducted before the intervention, step counts according to WeRun correlated strongly (Spearman’s correlation coefficient 0.766, p < 0.001) with those measured by a hip-worn accelerometer (Actigraph GT3x-BT, Pensacola, FL, USA). Days with less than 1,000 were considered invalid and daily steps were truncated at 30,000 steps/day . The primary outcome was mean daily valid steps during the intervention period (weeks 1–13). Baseline step levels were determined by calculating the average daily steps of participants over the two weeks prior to the intervention. The outcome variables were calculated based on team and individual levels, respectively. Data analysis Demographics and baseline characteristics, such as age, gender, education level, marriage, and average daily valid steps, were summarized separately for intervention and control groups. 
We summarized the descriptive statistics of the continuous variables (mean, standard deviation) and categorical variables (number and proportion of participants in each category). Differences in characteristics between the groups were tested with the Chi-square test for categorical variables, and one-way ANOVA was used for normally distributed continuous variables. We used a distributed lag non-linear model (DLNM), including quasi-Poisson regression, with the following lag structures [multi-day lag (01–03), (01–07), (01–30)] to identify the cumulative effects of team-based social norms related to communication (nudging, sharing and feedback) on average steps per week. The DLNM model was as follows: $$\log[E(Y_t)] = \text{intercept} + ns(\text{week}, 3) + \text{age} + \text{gender} + cb(X_i, \text{lag} = 13)$$ where $cb$ is the cross-basis function, which simultaneously specifies the exposure–lag–response relationship in the exposure–response and lag–response dimensions, and $X_i$ is nudging, sharing or feedback. Due to the hierarchical structure of the data, we employed a two-level random-effects regression analysis to examine the associations between nudging, sharing, feedback and average steps per week, adjusting for age, gender, marital status, education level, and baseline average steps per week. First, a null model was constructed to assess whether the average steps per week varied between groups using the intra-class correlation coefficient (ICC). The ICC was defined as the between-group variance divided by the sum of the within-group and between-group variances. Next, we examined the individual- and group-level variables in two separate models to examine their effects on average steps per week. Finally, we used a random intercept and slope model, entering both individual- and group-level variables, to examine their interaction effects. Alpha was set at 0.05 (two-sided). Data were stored in Microsoft SQL Server 2022, and statistical analyses were performed using R software version 4.1.2 ( https://www.r-project.org/ ) with the “DLNM” and “lme4” packages.
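A rough Python sketch of the two modelling steps described above is given below. It is not the authors' analysis code (which was written in R with the “DLNM” and “lme4” packages): the natural-spline cross-basis is approximated here by simple lagged exposure terms plus a B-spline of week, quasi-Poisson dispersion is approximated by Pearson chi-square scaling, and the file and column names (team, week, steps, nudging, age, gender) are assumptions.

```python
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Assumed layout: one row per team-week with columns
# team, week, steps (weekly average), nudging, sharing, feedback, age, gender.
df = pd.read_csv("team_weeks.csv")           # hypothetical file
df = df.sort_values(["team", "week"])        # lags require ordered weeks

# --- Simplified distributed-lag quasi-Poisson model ------------------------
# Lagged nudging terms stand in for the dlnm cross-basis cb(X_i, lag = 13).
for k in range(4):                            # lags 0-3 weeks as an example
    df[f"nudging_lag{k}"] = df.groupby("team")["nudging"].shift(k)
lagged = df.dropna(subset=[f"nudging_lag{k}" for k in range(4)])

lag_terms = " + ".join(f"nudging_lag{k}" for k in range(4))
poisson = smf.glm(
    f"steps ~ bs(week, df=3) + age + gender + {lag_terms}",
    data=lagged,
    family=sm.families.Poisson(),
).fit(scale="X2")   # Pearson X2 scaling approximates quasi-Poisson dispersion
cumulative_effect = poisson.params.filter(like="nudging_lag").sum()

# --- Null two-level model and ICC -------------------------------------------
null_model = smf.mixedlm("steps ~ 1", data=df, groups=df["team"]).fit()
between_var = float(null_model.cov_re.iloc[0, 0])   # team-level variance
within_var = null_model.scale                       # residual variance
icc = between_var / (between_var + within_var)      # ICC as defined in the text
print(f"ICC = {icc:.3f}, cumulative lag-0..3 nudging effect = {cumulative_effect:.3f}")
```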
Descriptive statistics Descriptive statistics of the individuals are presented in Table . Of the 2,985 participants in our study, 2,049 were in the intervention group and 936 in the control group. In the intervention group, participants were grouped into 92 teams according to their workplaces. In both the intervention (77.0%) and the control group (53.6%) there were more women than men.
The intervention group was younger than the control group (mean = 35.8; SD = 11.0; mean = 40.1; SD = 21.4 years, respectively). The proportion of the participants who were married was 72.4% and 84.9% in the intervention and control groups respectively. The proportion of participants with a high education level (Bachelor and above) was lower in the intervention group (56.6%) than in the control group (68.3%). In the intervention group, 19.0% of the participants were healthcare workers versus 51.8% in the control group. Baseline weekly average steps were higher in the intervention group than in the control group (8,446 ± 4,126 vs 7,060 ± 3,210). Figure shows differences in weekly average steps between the intervention and the control group at baseline and follow-up. In the intervention group, compared to baseline, weekly average steps significantly increased at follow-up, but this trend was not seen in the control group. Intervention effects As shown in Fig. , at Week 1, the intervention group demonstrated a significant increase in weekly average steps from baseline (average increase = 2,740, 95%CI: 2,583 to 2,897), while the control group had an average increase of 349 steps (95%CI: 169 to 529). The intervention group had a peak in weekly average steps after 4 weeks, with an average increase of 3,046 steps (95%CI: 2,874 to 3,218). The control group also experienced a peak in weekly average steps after 4 weeks, but with a lower average increase of 887 steps (95%CI: 673 to 1,101). At the end of the last week of follow-up, the weekly average steps of the intervention group increased by 1,777 (95%CI: 1,584 to 1,971) while the weekly average steps of the control group decreased by 11 (95%CI: -255 to 232). Over the 13-week intervention period, the weekly average steps of the intervention group had an increase of 2,523, while the control group saw an increase of 470 steps. Team communication types and intervention effects Figure shows the non-linear cumulative relationships between individual team-based communication types and average weekly steps. Findings suggested that the association between individual communication styles and step counts may differ at different stages of the intervention (3, 6, and 13 weeks). In the short-term (3 weeks of follow-up), the frequency of nudging of 7–18 times/week had a positive cumulative effect on the step counts. Sharing more than 3 times/week had a positive cumulative effect, while feedback at any frequency had a negative effect. Over 6 and 13 weeks of follow-up, nudging 19 times/week or more had a positive cumulative effect while sharing and feedback at any frequency negatively affected average weekly steps.
In this financial incentive intervention, we used 41,790 person-days of step counts and 3,268 chat records over 13 weeks to examine the effects of both team- and individual-based financial incentives on step counts and of team-level social norms as reflected by group chat history. Our study found that the financial incentive intervention was effective in increasing step counts. Interestingly, our findings suggest that the team communication types that reflect the social norms of nudging, sharing and feedback might have different effects on team-level step counts. This raises the possibility that the communication within the team contributed to the intervention's success, alongside the financial incentives. The financial incentives may have been a primary motivator. Due to the team-based design of the intervention, team members spontaneously communicated to achieve group rewards. It is plausible that team communication and reinforcing behavioural norms played a crucial role in sustaining or amplifying the observed effects. Future studies could disentangle these components by testing interventions with and without financial incentives or focusing solely on the communication aspect of the intervention to better understand the contribution of individual components.
To our knowledge, this is the first study on the effects of team-level social norms with continuous measurements of both social norms related to communication and device-measured physical activity. These findings improve our understanding of team-based and financial incentive interventions for walking behaviours. Over the 13-week intervention period, the weekly average steps of the intervention group increased by 2,523, while the control group increased by 470 steps. The effect of team-based financial incentives on step change was larger than in most other similar interventions. A systematic review and meta-analysis showed that in 12 of 23 included studies, financial incentives were associated with an average increase in daily steps of 607.1 during the intervention period (95% CI: 422.1 to 792.1; range from 93.0 to 3,907.0) . Specifically, in an RCT from the USA on financial incentives to increase physical activity, the combined individual and team incentives led to an average of 1,446 more daily steps than in the control group . A 24-week quasi-experimental study from Canada included 61,170 users and showed that adding team-based incentives increased mean daily steps by 537 . We also found that the step counts first increased and then decreased during the 13 weeks of follow-up, which is consistent with other similar studies . This trend may be related to intervention fatigue, which means that the novelty of the intervention declines, which weakens its effect gradually. Unfortunately, our intervention ended after 13 weeks and the evaluation of long-term sustainability was disabled by the subsequent COVID-19 outbreak and lockdown. Improving the sustainability of the intervention effect is a substantial challenge. Intrinsic motivation is often associated with the feelings of enjoyment, pleasure, or satisfaction , which may be fostered through adopting new habits of physical activity or obtaining goals. Furthermore, interventions like ours may also help establish a physical activity-friendly culture by changing social norms . Weekly score rankings have been shown to enhance the effectiveness of physical activity interventions by motivating group members and sustaining improvements during follow-up periods . Similarly, a recent intervention study among older women in Japan found that team-based social network incentives, which leverage social norms alongside financial incentives, were more effective than financial incentives alone. Notably, these effects persisted even after the intervention ended , suggesting that the presence of social norms may have a positive impact on participants' intrinsic motivation to engage in physical activity. The combination of stimulating individual intrinsic motivation and establishing team social norms may improve the sustainability of the intervention effect. Future studies should explore the long-term effects and sustainability of physical activity interventions. It is worth noting that our team-based financial incentive intervention involved weekly team rewards and final rewards for both teams and individuals. In addition, using insights from behavioural economics, weekly team rewards can help not only increase immediate feedback but also strengthen the relationship between team members . We explored how team-based social norms related to communication types affected team-level steps through a prospective study design. Analyses showed that non-linear cumulative relationships in different time lengths were also examined. 
For nudging, the frequency between 7–18 messages per week had a positive cumulative effect on team steps in the short-term (3 weeks of follow-up), while in the medium (6 weeks) and long term (13 weeks of follow-up), nudging 19 times or more showed a positive cumulative effect. Nudging from team members, family, friends and others can serve as external environmental factors as per BCW, influencing an individual's behavioural capacity, opportunities and motivation, and thus facilitating or impeding changes in physical activity behaviours . For example, in a study that applied the BCW to improve the physical activity level of diabetes patients, text messages and nudging increased capacities and opportunities for physical activity . For messages classified as ‘sharing’, more than 3 times a week had a positive cumulative effect in the short-term, while in the medium and long term sharing at any frequency had a negative cumulative effect on weekly average steps. Sharing and encouragement among teammates could influence an individual's perception of subjective norms and behavioural control, through which positive behaviour change could be engendered. Sharing, including discussions and conversations, not only fulfills individuals' social needs but also significantly influences group behaviours by shaping social norms and fostering unconscious behavioural mimicry. This social contagion effect underscores the importance of shared experiences in fostering a supportive environment for physical activity . For example, compared to non-app users, app users demonstrated a stronger intention to maintain their running behaviour and more frequently encouraged other participants, likely due to the interactive features of the apps . In terms of ‘feedback’ messages, whether in the short, medium or long term, any frequency had a negative cumulative effect. Evidence from a meta-analysis on the effects of feedback on health behaviours suggests that the impact of feedback on increasing physical activity is mixed, this may be due to different attributes of the feedback (e.g., frequency, personalisation, graphical versus text-based formats), which can have varying influences on the outcomes . In some cases, feedback may provoke negative emotions that diminish motivation for healthy behaviours, potentially leading to disengagement. As shown in our study not all the feedback messages about team ranking results were positive, and less constructive feedback may have discouraged team members or led to them giving up on achieving the step goals subsequently. For messages classified as ‘nudging’, a stronger association with cumulative step counts was observed at higher frequencies. This may be because the goal of nudging is clearer than that of sharing and feedback, so it is easier to form subjective norms and herd behaviour. Our study did not find sharing and feedback to have a positive cumulative effect in the medium and long term. This result may be explained by insufficient follow-up time or insufficient frequency of sharing and feedback. Future studies should determine the potential dose–response effects of feedback and sharing on walking behaviour change. A significant contribution of this study was to provide the evidence that, within a financially incentivised physical activity intervention, different types of social norm-related communication within teams had varying effects on the intervention outcomes. 
Specifically, 'nudging' messages showed the strongest positive impact on improving the performance of the intervention. Compared with existing studies, our study has several important strengths, such as continuously measured social norms related to communication, including messages of a 'nudging', 'feedback' and 'sharing' nature, captured by collecting chat records over 13 weeks of follow-up. Moreover, we examined the cumulative effects of social norms related to communication. This study has some limitations. First, for practical reasons, randomisation of participants into intervention and control groups was not feasible in this quasi-experimental study. Instead, employers self-selected their participation into either the intervention or control group, which may have introduced selection bias and resulted in different demographic characteristics (e.g., gender) between the two groups. Although these demographic characteristics were adjusted for in the constructed model, such baseline differences may still lead to variations in responsiveness to the intervention, potentially impacting the estimation of the intervention effect. Second, this intervention provided a common step target for all participants. While 10,000 steps/day may be a suitable target for some, it may not be achievable for others, which may have limited its effectiveness in some participants. Future interventions should consider incorporating individualised goals. Third, the intervention period was only 13 weeks, which was shorter than other similar studies . We planned for longer-term data collection to examine the maintenance of the intervention effects post-intervention but were unfortunately unable to do so due to the subsequent COVID-19 pandemic and lockdown. However, despite the relatively short follow-up we managed to capture meaningful increases in step counts in the intervention group. Fourth, compared to a dedicated activity tracker, smartphones may lack accuracy due to various factors, such as the frequency and location of carrying the phone. Despite limitations, we considered using smartphones for step tracking a pragmatic approach because 1) it didn't involve additional participant burden, as all participants already used WeChat and WeRun, and was therefore likely to result in better adherence; and 2) if the sources of bias were random, the lack of accuracy may be less of a problem because intervention and control groups both used the smartphone tracker before, during and after the intervention. Additionally, the impact of the 'other' communication type on the effectiveness of physical activity interventions may have been overlooked in this study. Finally, we did not include pregnant women in this study due to culturally specific safety considerations, despite physical activity being widely recommended to pregnant women internationally . Because physical activity during pregnancy is discouraged according to traditional Chinese culture, we were concerned that including pregnant women could lead to safety concerns and negatively affect the program's reputation, thus undermining recruitment and engagement. This team-based financial incentive intervention significantly increased daily step counts. Based on objectively measured step counts and more than 3,000 chat records, our study revealed that the team-level social norms related to communications, including nudging, sharing and feedback, might have different cumulative effects on team-level physical activity.
'Nudging' messages, but not the other two types of messages, were significantly associated with the change in individual-level step counts in the medium or long term. Future interventions should capitalize on health-promoting social norms to maximize intervention effects. Supplementary Material 1.
Endodontic retreatment decision‐making: The influence of the framing effect
e4f6e707-6215-4f74-b304-cdacbbf0b4a2
10098276
Dental[mh]
INTRODUCTION Epidemiological studies have reported a prevalence of periapical radiolucencies in root‐filled teeth of between 12% and 72% (Kielbassa et al., ; Pak et al., ; Silnovic et al., ). Although diagnosis is not always straightforward, most cases are caused by an inflammatory lesion, apical periodontitis (AP). As AP in root‐filled teeth tends to remain more or less asymptomatic over many years, its first diagnosis is often made during a routine examination or as an incidental finding. According to the prevailing academic paradigm, a lesion diagnosed as AP in association with a root‐filled tooth is defined as an “endodontic failure,” and thus implies a clinical decision and action (Reit & Kvist, ; Strindberg, ). For this reason, endodontology scholars since at least the 1980s have been puzzled and annoyed by recurrently shown variation in clinical decisions about root‐filled teeth with AP, and particularly by practitioners’ reluctance to suggest and institute an endodontic retreatment procedure (Kvist et al., ; Reit et al., ; Taha et al., ). The many complex factors involved in the clinical decision‐making process have made it difficult to present a coherent model for explaining and understanding these variations (Kvist, ). However, there is good reason to assume that these variations can be attributed to two main categories of uncertainty: facts and values (Kvist & Reit, ). With regard to facts, solid scientific evidence is lacking on questions regarding both the diagnosis of a “failure” and the outcome of retreatment or a no‐intervention alternative (Frisk & Kvist, ). With regard to values, the variations in question may stem from different perceptions of disease, educational contexts, and values concerning illness and health (Kvist & Reit, ). Due to the great uncertainties, authors have emphasized the importance of the patient's right to autonomy and hence participation in the process involving decisions on retreatment (Azarpazhooh et al., ; Kvist & Hofmann, ; Kvist & Reit, ). Autonomy, or self‐determination, means that an individual has the right to decide on matters regarding his or her own body, mind, and life. The right to autonomy has a strong foundation in various ethical theories (Beauchamp & Childress, ). As the concept of autonomy also includes an individual's right to decide on his or her healthcare, any two‐way communication process involving information sharing and decision‐making should always precede a medical or dental decision on treatment or refraining from it (World Health Organization, ). For a patient to be able to make an autonomous decision, the dentist must therefore provide the patient with all relevant facts: the findings, the etiology of the disorder, the various options available for dealing with it, and the risks, costs, probable outcome, and long‐term prognosis (Kvist, ; Kvist & Hofmann, ). Like in many other clinical situations, in the case of a root‐filled tooth with AP, many of the facts that are required for the provision of valid evidence‐based information are missing or highly uncertain (Frisk & Kvist, ; Kvist & Hofmann, ). There is also the matter of how the available information should be presented. A choice between options can be framed in different ways. The framing effect, which was first recognized by Tversky and Kahnemann in 1981 (Tversky & Kahneman, ), is described as a cognitive bias whereby people decide on options on the basis of whether they—the options—are presented with positive or negative connotations. 
Although this cognitive bias effect has been explored in several medical decision‐making contexts (Gong et al., ), its relevance has been met with very little interest among clinical researchers in dentistry (Arora, ). But in one study by Foster & Harrison ( ), first‐year dental students simulated the role of patients in an experiment on the effect of framing in an endodontic decision‐making situation (Foster & Harrison, ). In a scenario involving a symptomatic tooth with failed endodontic therapy, they were asked to select one of two treatment options: nonsurgical endodontic retreatment, or extraction and implant placement. Their selection of treatment was significantly influenced by biased presentations. The present study was set up to explore the possible influence of a framing effect when an individual is asked to choose between no intervention and retreatment of a root‐filled tooth presenting with asymptomatic AP. MATERIAL AND METHODS 2.1 Participants A total of 248 individuals (74 men and 173 women) who studied or worked within the area of dentistry were recruited on a voluntary basis. This number included 121 dental students, all of whom were studying at the Institute of Odontology at Sahlgrenska Academy, University of Gothenburg, Sweden. They had reached various training levels, with 49 in the first year, 29 in the second, and 43 in the third. Seventy‐four participants were drawn from the staff at the Institute of Odontology: 32 dentists, 7 dental hygienists, 32 dental nurses, and 2 people in administration and reception. The 53 remaining participants consisted of general dentists, both private and public employees, who were attending a course in endodontics at the Gothenburg Dental Society. 2.2 Questionnaire Two variants of a questionnaire were created, each designed to cause a respondent to answer from his or her perspective as a potential patient. The description of the clinical decision‐making situation was simple and patient‐oriented. Although the clinical situation and information were identical in both questionnaires, the two alternative treatment options were systematically framed in two different ways. The clinical situation was described as follows: Imagine that 5 years ago you were involved in a bicycle accident in which you injured your upper left central tooth. As a result, the tooth needed root canal treatment. It was opened up, cleaned of bacteria, and filled with a rubber‐like material. Finally, it was sealed with a plastic filling. Since then, you have not experienced any problems with this tooth . When you come to your dentist for your annual routine checkup today, the dentist decides to take a radiograph of the tooth. This shows a lesion in the bone around the tip of the tooth . On the radiograph, you can see that the bone around the root tip of the tooth is a little darker. This indicates that there are bacteria left inside the root canal that cause inflammation, which in turn becomes visible on the radiograph. The root filling appears to be short and incomplete . Your dentist presents two options. You have free dental care, so any treatment you choose is free of charge . In the first variant of the questionnaire, the intention was to frame the options in favor of refraining from retreatment now and of waiting and seeing (FW). Option A (Wait) . Refrain from retreatment now and wait and see. The chance that the tooth will be asymptomatic for the rest of your life is approximately 90%. 
The chance that any remaining infection will have no negative effect on your overall health is more than 99% . Option B (Retreat) . Retreatment, which involves remaking the root‐canal treatment so that the root filling becomes dense and is the correct length. Despite retreatment, the risk that the inflammation will not heal is approximately 25% . In the second variant of the questionnaire, the intention was to frame the options in favor of retreatment (FR). Option A (Wait) . Refrain from retreatment now and wait and see. The risk that the tooth later will become symptomatic in the form of pain and/or swelling that requires treatment is approximately 10%. The risk that any remaining infection has a negative effect on your overall health is less than 1% . Option B (Retreat) . Retreatment, which involves remaking the root‐canal treatment so that the root filling becomes dense and is the correct length. The chance that the inflammation will heal is approximately 75% . At the end of the questionnaire, respondents were requested to register their gender, age, and occupation, or, if they were dental students, to state their year of study. The two variants of the questionnaire are presented in Figure . Figure 1 (a and b) The two different versions of the questionnaire (a = favoring wait and see [FW] and b = favoring retreatment [FR]) distributed to the participants in the study. 2.3 Distribution and procedures The same number of copies (150) of questionnaire variants FW and FR were printed and sorted into a stack in which FW consistently alternated with FR. The questionnaires were distributed on five different occasions. First, two of the authors (Agnesa Smakiqi and Daniela Henelius) gave a short introduction in which the participants were told, that the questionnaire was supposed to provide the basis for a Master's thesis on clinical decision‐making in root‐filled teeth. Participants were also informed that reading and answering would require no more than 10 min, that all answers would be anonymous, and that as participation was completely voluntary, the questionnaire could also be returned unanswered. Participants were asked not to communicate with each other when completing the questionnaire. The real purpose of the study was concealed from the participants, as was the fact that two different variants of the questionnaire would be distributed. The stack of questionnaires was distributed, with each participant receiving only one questionnaire. The questionnaire was distributed to the students in the lecture hall during a selected lecture and to the staff at the Folktandvården Education Clinic for Dentistry during a staff meeting. These questionnaires were all collected immediately. Distribution of the questionnaire among dentists took place during an evening course organized by Gothenburg Dental Association (GTS). These participants were required to submit the questionnaire either immediately, or by post in a prestamped letter to the Department of Endodontology. The information from each questionnaire was then transferred to an Excel data sheet (Microsoft Corp). 2.4 Ethical considerations This study was originally a part of a master's thesis at the Institute of Odontology, Sahlgrenska Academy, University of Gothenburg, Sweden. No patient or patient data was involved in the study, except for an anonymous radiograph, of which the patient had given consent to be used for the purpose. 
All respondents were informed that their answers would be anonymous and that participation was completely voluntary; the questionnaire could be returned unanswered, without registration. 2.5 Statistical methods Before the statistical analysis, participants were divided by gender and into three groups on the basis of age: 18–25, 26–49, and 50+. The respondents were also categorized by setting: dental students (further divided by training level: 1st, 2nd, or 3rd year), staff at the dental school, and dentists attending the course. For comparisons between the different groups, Fisher's exact test was used with a two‐sided 5% significance level, calculated with the online calculator at https://www.graphpad.com .
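To make the group comparisons concrete, the sketch below reproduces this kind of 2 × 2 analysis in Python rather than with the GraphPad online calculator used by the authors; the cell counts are taken from the Results section that follows, and the use of scipy is an assumption of this illustration, not part of the original analysis.

```python
from scipy import stats

# Rows: questionnaire variant (FW = framed for wait-and-see, FR = framed for retreatment)
# Columns: option chosen (wait-and-see, retreatment), as reported in the Results
table = [[69, 56],
         [38, 85]]

odds_ratio, p_value = stats.fisher_exact(table, alternative="two-sided")
print(f"odds ratio = {odds_ratio:.2f}, two-sided p = {p_value:.4f}")
# The two-sided p-value should land close to the reported p = .0002
```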
RESULTS A total of 248 individuals participated in our study, 141 of whom (56.9%) chose retreatment and 107 of whom (43.1%) chose the wait‐and‐see option. One hundred and twenty‐five participants (50.4%) had received the questionnaire variant framed in favor of refraining from treatment (wait‐and‐see) (FW), and 123 participants (49.6%) had received the variant framed in favor of retreatment (FR). Whereas 69 (55.2%) of the participants who had received questionnaire FW chose the option to refrain and wait, 56 (44.8%) chose retreatment. In contrast, whereas 38 (30.9%) of the participants who had received questionnaire FR chose to refrain and wait, 85 (69.1%) chose retreatment. This difference was statistically significant ( p = .0002) (Table ). Seventy‐four (30%) of the participants were men and 173 (70%) were women. When the possible framing effect was analyzed on the basis of gender, a statistically significant framing effect ( p = .0004) was found among women. In men, a framing effect was registered numerically, but the difference was not statistically significant ( p = .20) (Table ). One hundred and eight participants (45%) were in the 18–25 age group; 45 (18%) in the 26–49 age group; and 89 (37%) in the 50+ age group. A statistically significant framing effect ( p = .020) was detected in the 18–25 age group. In the 26–49 and 50+ age groups, the framing effect was not statistically significant (Table ). The results were also analyzed on the basis both of occupational category and of the occasion on which the questionnaire was distributed. A framing effect was observed regardless of the category but reached statistical significance only among 3rd year students ( p = .016) (Table ). DISCUSSION The results of this study show that a framing effect can be expected to play a role in endodontic retreatment decision‐making. The pooled data from all respondents showed a statistically significant effect. A similar effect was seen when the respondents were divided into subgroups categorized by gender, age, occupation, and occasion. However, the effect was not statistically significant in all analyses. Although it cannot be ruled out that no framing effect is present in some groups of respondents, the more probable explanation is that some of the subgroups were too small, resulting in a statistical Type II error. Future research projects could aim to evaluate whether the framing effect is greater, smaller, or not demonstrable at all in different groups of potential decision‐makers depending on gender, age, level of education, or other group‐defining characteristics. The explanation for the framing effect can be found within the framework of prospect theory (Kahneman & Tversky, ). This theory, which essentially concerns economic behavior, challenged the idea of rationality among decision‐makers as it had explicitly been formulated in the expected utility theory (Von Neumann & Morgenstern, ). Prospect theory, which was based on results from controlled studies, describes how individuals assess their loss and gain perspectives in an asymmetric manner. The theory assumes that there are two different phases of decision‐making.
In the first phase, the stated alternatives are automatically evaluated, a process that involves analysis and simplification of the information they contain. In the second phase, the decision‐maker considers the alternatives and chooses the one he or she judges to be most beneficial (Kahneman & Tversky, ). When a choice is being made between two options, an alternative that is described in entirely positive terms seems preferable to one that is described in negative terms, even though both descriptions state exactly the same factual information. In a classic study, McNeil et al. ( ) investigated how variations in the way information was presented influenced the choices made by ambulatory patients, graduate students, and physicians when deciding between alternative therapies—radiation or surgery—in cases of lung cancer. Different groups of respondents received input data that differed according to whether the treatment outcomes were framed in terms of the probability of living or the probability of dying. In all three groups of respondents, the attractiveness of surgery relative to radiation therapy was greater when the problem was framed in terms of the probability of living rather than in terms of the probability of dying. In our study involving the root‐filled tooth with AP, the factually identical information on the “wait and see” and “retreatment” options was framed either by using the word “chance” or the word “risk” to indicate the probability (likelihood) of outcomes. However, the connotations of the words “risk” and “chance” are essentially different (Li et al., ; Morizot, ). While chance has a positive connotation (the likelihood of something good happening), risk has a negative connotation (the likelihood of something bad happening). By using a more neutral word such as “probability” or “likelihood,” a clinician presenting prognostic assumptions about a clinical option could possibly reduce the framing effect. To enhance the framing effect in our experiment, we combined the word “chance” with “healing” (a positively laden expression), and “risk” with “nonhealing” (a negatively laden expression). Similarly, a statement of a 90% likelihood of success highlights the attractive outcome of a procedure, whereas a 10% likelihood of failure tends to highlight the unattractive outcome. Partly because of the strongly value‐laden component of the “success” and “failure” classifications, various authors have suggested alternative systems and terms to evaluate and classify the outcome of root canal treatment (Friedman & Mor, ; Messer & Yu, ; Wu et al., ). Language evidently plays an important role in many aspects of medicine and healthcare and may be used as a powerful tool in clinical decision‐making situations (Srivastava, ). Thus, the clinician presenting the information to the patient may, consciously or unconsciously, influence the patient's choice in favor of a particular option. In the absence of strong scientific evidence for the benefits or harms of a particular choice, it may also be assumed that the clinician's framing of the options, and the way they are presented to the patient, is influenced by different heuristic biases concerning probability (Hicks & Kluemper, ; Reit et al., ; Tversky & Kahneman, ). The heuristic referred to as “availability” describes the phenomenon whereby people assess the frequency of a class or the probability of an event on the basis of the ease with which instances or occurrences can be brought to mind (Tversky & Kahneman, ).
For example, the influence of availability may be expected when the retreatment and wait‐and‐see options for a new patient with asymptomatic AP are framed by a dentist who recently met a patient with a flare‐up in a root‐filled tooth. In particular, this is to be expected if, in any aspect, the present case resembles the recent experience of a patient in severe pain, as explained by the principle of “representativeness” (Hicks & Kluemper, ; Tversky & Kahneman, ). Representativeness is defined as a heuristic bias that occurs when the similarity of objects or events confuses people's thinking regarding the probability of an outcome. Quite apart from the heuristic reasoning about probability that may influence dentists’ expectations and preferences, their clinical choices may even be affected by their prejudices about their patients (Patel et al., ). An interesting finding in our study was that even though a framing effect was evident at the group level, both options, regardless of the variant of the questionnaire, were chosen rather frequently. This finding indicates that factors other than how the information was framed are important for the respondent's choice. This is consistent with previous studies on the subject of endodontic retreatment decision‐making (Kvist, ; Reit & Kvist, ). The praxis concept theory of endodontic retreatment decision‐making was proposed by Kvist et al. ( ). At its core is the hypothesis that interindividual variation in decision‐making on endodontic retreatment can largely be explained by variation in the values of the individual decision‐makers. The observations in this study, which involved respondents with varied experiences and backgrounds, do not falsify this theory. To explain the origin of the various values involved in the endodontic retreatment decision‐making process, it is assumed that an individual's values are, at least in part, a merged mental disposition developed through experience in different environments (Kvist, ; Kvist & Reit, ; Kvist et al., ; Taha et al., ). For example, it has been shown that, in endodontic retreatment decision‐making situations, endodontists systematically make decisions differently than students, general dental practitioners, or specialists in other disciplines (Bigras et al., ). Any acknowledgment of the framing effect challenges the concept of patient autonomy and informed consent. By consciously or unconsciously choosing value‐laden words and framing different treatment options, a therapist will influence their patient's choice, intentionally or otherwise. Only if the dentist is aware of this problem, and consciously attempts to provide information in ways that are as neutral as possible, will he or she be able to reduce this effect. On the other hand, the power of the framing effect in clinical decision‐making may also be deliberately used in situations to influence patients to make the “right” decision—“right” in the sense that there are good reasons and good evidence to believe that a certain decision is in the patient's best interests. Sherman et al. ( ) and Patel et al. ( ) showed how the framing effect was used to encourage the use of dental floss: those who received information in the form of a gain‐framed video were more likely to use dental floss according to the recommendations for a period of 6 months than those who saw a loss‐framed video. This study makes no claim to fully chart the framing effect in clinical decision‐making in connection with root‐filled teeth.
As the respondents were not drawn from nonprofessional groups, and as all had some kind of affiliation with dentistry, the external validity of the study can be questioned. It may also be argued that the factual information—that is, the percentages regarding healing, the likelihood of becoming symptomatic, and the influence of any remaining infection and inflammation on systemic health—is not based on the best available evidence. However, it was not our purpose to systematically review the best current evidence on the matter. Instead, our purpose was to apply the phenomenon of framing to a well‐known clinical decision problem within endodontics, to provide some empirical support, and to discuss the questions that arose and their implications. CONCLUSION A framing effect is likely to play an essential role in endodontic retreatment decision‐making for root‐filled teeth with asymptomatic AP. Thomas Kvist, Daniela Henelius, and Agnesa Smakiqi all made substantial contributions to the conception and design of the study. Daniela Henelius and Agnesa Smakiqi were responsible for and involved in data collection. Thomas Kvist, Daniela Henelius, and Agnesa Smakiqi were all involved in data interpretation, statistical analyses, drafting, and critically revising the manuscript. All authors have given final approval for the version to be published. The authors declare no conflict of interest.
Australian Dentists' Knowledge of the Consequences of Interpretive Errors in Dental Radiographs and Potential Mitigation Measures
2ae80778-6ca2-4654-98d4-9cda8b1c639d
11486910
Dentistry[mh]
Introduction A great deal of attention and effort is dedicated to ensuring the provision of safe and high‐quality care to patients. Nevertheless, radiographic interpretive errors do occur, and they can lead to unintentional patient harm caused by medical management rather than by the patient's underlying condition (Kohn, Corrigan, and Donaldson ). Interpretive errors are a type of diagnostic error that has been extensively studied in medical radiology (Degnan et al. ; Itri et al. ; Waite et al. ). Although interpretive errors in dental radiographs have not been as widely studied as in medical radiology, evidence suggests that they occur in dental practice, and several factors influence their occurrence (Hegde et al. ). These factors include time pressure, stress, clinical experience, case complexity, and cognitive load (Bretherton, Chapman, and Chipchase ; Cave and Hutchison ; Hegde et al. ; Plessas et al. , ). The severity of consequences arising from interpretive errors varies and depends on factors such as the type of error, the stage of the diagnostic process, and the promptness of identification and correction (Balogh et al. ). These errors can lead to adverse outcomes that can be classified as patient‐related or clinician‐related consequences, potentially resulting in unnecessary treatments, malpractice lawsuits, damage to the clinician's reputation and credibility, increased financial burdens for both the clinician and patient, and a negative impact on the patient's quality of life (Nikdel et al. ). Among the consequences of interpretive errors in medical imaging, malpractice lawsuits have been particularly significant (Busardò et al. ). Although extreme consequences of interpretive errors in dentistry are unlikely, it is essential to acknowledge and mitigate these errors to ensure quality patient care and minimize associated consequences. Furthermore, the context of dental imaging, radiographic interpretation, and diagnosis is distinct in dentistry compared to medicine. General dentists are responsible for capturing and interpreting dental plain film radiographs, whereas in medicine, specialists perform most imaging and interpretation of medical radiographs. This distinction could lead to consequences of interpretive errors that are unique to dentistry. Given their responsibility for initial radiographic screening, general dentists play a crucial role in the early detection and diagnosis of dental conditions (Rushton, Horner, and Worthington ; Zeichner, Ruttimann, and Webber ). With advances in diagnostic and healthcare technologies, the field is well placed to minimize errors and improve diagnostic accuracy and the quality of patient care. However, there is no single solution to the problem of interpretive errors. In medical imaging, several approaches have been applied to improve accuracy in radiographic diagnosis. These include an emphasis on critical analysis and clinical decision‐making in medical education (Morrissey and Heilbrun ), the use of simulation in teaching diagnostic methods (Solomon et al. ), diagnostic checklists for structured analysis of medical images (Ely, Graber, and Croskerry ), and improvements in technology such as automated clinical decision support (Oakden‐Rayner ) and machine learning (ML) algorithms (Choy et al. ). Using technology can be advantageous in minimizing interpretive errors as it can enhance human performance.
Peer review, with continuous and critical reflection on physicians' performance using structured procedures, has also been shown to improve diagnostic accuracy (Lee et al. ). Active and effective management of interpretive errors requires the development of a system for identifying and managing these errors. Awareness and knowledge of their frequency and causes are prerequisites for identifying interpretive errors as and when they occur in clinical practice. These steps will enable the implementation of effective measures to prevent such errors from occurring in the future. This survey aims to gather dental practitioners' views on the impact of interpretive errors on both themselves and their patients. This study also aims to assess dental practitioners' perceptions of the currently available measures to mitigate such errors in clinical practice. Materials and Methods A cross‐sectional, anonymized survey of dentists practising in New South Wales, Australia, was undertaken. This survey was conducted as a part of a broader study that assessed the perceptions of Australian dental practitioners about interpretive errors in dentistry (Hegde et al. ). This study was approved by the University of Sydney Human Research Ethics Committee (Approval Number 2020/336) and conducted in accordance with the Declaration of Helsinki (World Medical Association ). The target population included dental practitioners within the research team's network, subscribers to the Australian Dental Association's (ADA) e‐newsletters, users of social media platforms, and attendees of dental continuing professional development (CPD) workshops. An anonymized link to the survey was created and made available to potential participants via emails, e‐newsletters, social media platforms, and during CPD workshops. Informed consent was implied when participants completed and submitted the survey. The survey was conducted in two phases: the first from September 2020 to July 2021, and the second from January to March 2022, with the latter aimed at increasing participant numbers. Regular monitoring of the data during these periods allowed for the observation of response trends. Data saturation was determined manually, evidenced by the lack of new information arising from subsequent responses, a method analogous to the data saturation techniques employed in qualitative studies. The survey concluded once it was established that data saturation had been achieved, in line with the principles of grounded theory (Glaser and Strauss ). The online questionnaire was developed and distributed on the Qualtrics platform ( www.qualtrics.com ). A questionnaire was created after an extensive review of the literature pertaining to interpretive errors in radiology and clinical decision‐making, and after consultation with experts in radiology and survey design. This questionnaire aimed to explore how participants perceived the consequences of interpretive errors in dental radiographs and to gather their views on measures to minimize interpretive errors in clinical dentistry. The questionnaire comprised several categories, including questions about the consequences of interpretive errors for both clinicians and patients and measures to reduce such errors in dentistry. Demographic data gathered included participants' location and type of practice, the type and the number of radiographs taken in their dental practice each day, and the type of intra‐oral imaging system used in their practice.
The survey questionnaire is included in Supporting Information S1: File , with questions in Sections 5, 6, and 7 addressing this study. These sections included questions about the consequences of interpretive errors, potential solutions to minimize errors, and participant demographic data (Supporting Information Material). Data were exported from the Qualtrics platform to SPSS statistical software (Version 26, IBM, SPSS, Chicago, IL) for analysis. Cronbach's ⍺ was used to measure the internal consistency of the survey instrument. Descriptive statistics were used to summarize the demographic data of participants. Chi‐squared tests of independence were used to find the association between categorical variables. Fisher's exact test was used when more than 20% of the cells had an expected frequency of less than 5. Statistical comparisons were not conducted when expected cell counts were less than 1. The responses between different demographic groups, such as gender, type of practice (public or private), and level of training (general dentist or specialist), were compared. Spearman's rank correlation was used to test the correlations between variables. The statistical significance was set at p < 0.05. Results This survey received 118 responses, of which 38 records were excluded from the analysis due to missing data. The survey instrument had a high internal consistency with Cronbach's ⍺ value of 0.928. The significant findings from the survey are presented here to highlight meaningful trends and insights. 3.1 Demographic Features A summary of the demographic characteristics of the survey participants is presented in Table . The participants' ages ranged from 23 to 72 years. The reported clinical experience ranged from 1 to 50 years, averaging 17.1 (±11.8) years. On average, the participants reported taking 12 radiographs a day: four periapical radiographs, five bitewings, and three OPGs. The participants reported taking a range of radiographs, and most of them used the photostimulable phosphor (PSP) imaging system. 3.2 Consequences of Interpretive Errors Affecting Patients and Clinicians Among the patient‐related consequences, the participants recognized undertreatment (72%) as the most significant, followed by an increase in the financial burden to the patient (62%). In contrast, severe harm or patient mortality (30.7%) was considered the least likely outcome (Figure ). For clinician‐related consequences, participants identified legal consequences (82%) as the most likely ones. Loss of reputation (75.6%) was also recognized as a significant consequence, whereas the additional cost to the clinician (52.7%) was considered the least likely consequence (Figure ). Statistically significant correlations were observed between patient‐related and clinician‐related consequences (Table ). Additional cost to the dentist was the only consequence that correlated with all patient‐related and clinician‐related consequences with statistical significance ( p < 0.001). Similarly, additional cost to the patient had a moderate correlation with all other patient‐related consequences ( p < 0.001). Interestingly, there was a strong positive correlation between the loss of a clinician's reputation and the loss of a client due to interpretive errors ( r = 0.770, p < 0.001).
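To illustrate the analysis rules described in the Methods (chi‐squared tests of independence with a fall‐back to Fisher's exact test when expected cell counts are small, and Spearman's rank correlation between consequence ratings), the sketch below gives a minimal Python version with hypothetical data; scipy and the example counts are assumptions of this illustration and are not part of the original SPSS analysis.

```python
from scipy import stats

# Hypothetical 2 x 2 cross-tabulation, e.g. gender vs. "legal consequences considered likely"
table = [[28, 5],
         [17, 5]]

chi2, p, dof, expected = stats.chi2_contingency(table)
# Fall back to Fisher's exact test when more than 20% of expected counts are below 5
if (expected < 5).mean() > 0.20:
    _, p = stats.fisher_exact(table)

# Hypothetical Likert-type ratings of two consequences given by the same respondents
reputation_loss = [4, 5, 3, 4, 5, 2, 4, 5, 3, 4]
client_loss = [4, 5, 2, 4, 4, 2, 5, 5, 3, 3]
rho, p_rho = stats.spearmanr(reputation_loss, client_loss)

print(f"association p = {p:.3f}; Spearman rho = {rho:.2f} (p = {p_rho:.3f})")
```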
3.3 Gender and Professional Qualification Variations in the Perceptions of the Consequences of Interpretive Errors Interestingly, female and male participants had different opinions about the likelihood of patient‐related and clinician‐related consequences of interpretive errors. More female participants (78%) than male participants (63.6%) considered undertreatment a likely patient‐related consequence of interpretive errors. In contrast, more male participants considered patient harm or mortality (females, 40%; males, 30%) and negative impact on patient's quality of life (females, 40%; males, 28%) as the likely results of interpretive errors. When the clinician‐related consequences were compared, more female participants (84.8%) than male participants (77.4%) considered legal consequences likely. Female participants were also more likely than male participants to consider loss of a client (78.4% for females and 40.9% for males, p = 0.023) and loss of reputation (females, 82.5%; males, 54.6%; p = 0.039) likely to occur. When the responses of general dental practitioners (GPs) were compared with those of the specialists, it was found that more GPs (74.3%) than specialist dentists (71.4%) considered undertreatment the most likely patient‐related outcome. More specialists (71%) than GPs (60%) thought that additional cost to the patient was a possible outcome. When the responses regarding the clinician‐related consequences were compared, further trends were revealed. More specialists (71%) than GPs (54%) considered that a rise in insurance premiums was likely to occur due to interpretive errors. In contrast, more GPs (65.7%) than specialists (57%) considered loss of a client a likely consequence of interpretive error. For each patient‐related and clinician‐related consequence, dentists practising in the public (hospital‐based) sector considered it more likely to occur than did dentists in private practice. 3.4 Measures to Minimize Errors Participants identified that using high‐quality images for diagnosis (63.9%) and prescribing appropriate radiographs based on the patient's chief complaint (59.7%) were the most effective means of minimizing interpretive errors. Undergoing training and education to enhance diagnostic skills (54.2%) and discussing a case with a colleague (45.8%) were also recognized as effective measures. Interestingly, although the use of ML algorithms was identified as the least effective measure (11.1%) for reducing the incidence of interpretive errors in dentistry (Figure ), it was observed to correlate with measures related to cognitive aids such as standardized checklists ( p < 0.001) (Table ). 3.5 Gender Perspectives Regarding Measures to Minimize Interpretive Errors Although there were no significant gender differences in the responses, more male participants (72.7%) than female participants (60.8%) considered using high‐quality radiographs an essential measure to minimize interpretive errors. More male participants (68%) than female participants (58.6%) considered prescribing appropriate radiographs an effective error‐minimizing measure. However, more female participants (43%) than male participants (36%) considered comparing new radiographs with old ones an effective measure. There were no differences in the responses between GPs and specialists and between dentists in public and private practices.
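The internal consistency reported for the survey instrument (Cronbach's ⍺ = 0.928) can be reproduced from item‐level responses with a short calculation. The sketch below is a minimal Python illustration, not the SPSS procedure used in the study, and the Likert responses it operates on are hypothetical.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for a respondents x items matrix of Likert scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                          # number of items
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the summed scale
    return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)

# Hypothetical responses: 6 respondents x 4 items on a 5-point scale
responses = [[4, 5, 4, 5],
             [3, 3, 4, 3],
             [5, 5, 5, 4],
             [2, 3, 2, 3],
             [4, 4, 5, 4],
             [3, 4, 3, 3]]
print(round(cronbach_alpha(responses), 3))
```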
Discussion The survey aimed to assess dental practitioners' perceptions of the impact of interpretive errors in dentistry and of potential solutions to minimize the occurrence of such errors. One notable observation was that the participants generally did not perceive that interpretive errors led to serious harm to patients. Instead, the survey revealed that dental practitioners placed greater emphasis on the potential consequences of interpretive errors in their practice, particularly in terms of the loss of reputation and financial consequences. This finding suggests that practitioners were more concerned about the impact of errors on their professional standing and the potential negative effects on their relationships with patients and colleagues. The emphasis on loss of business and reputation as significant consequences of interpretive errors highlights the importance that dental practitioners place on maintaining a high level of competence and accuracy in their diagnostic and interpretive abilities. It underscores the potential impact that errors can have on patient trust and satisfaction and the potential for negative word‐of‐mouth and reputational damage. These findings contrast with medical radiology, where malpractice lawsuits are the biggest concern as a consequence of interpretive errors (Berlin ; Biswas et al. ). Other consequences of interpretive errors in medical radiology include repeat imaging, delayed or inappropriate treatment, avoidable hospital admissions, and increased mortality (Ahn et al. ; Onder et al. ). Overall, dental and medical practitioners share a common goal of minimizing interpretive errors.
However, the concerns of dental practitioners are distinct from those of medical practitioners, which could be attributed to the unique contexts and priorities within each healthcare domain, such as variations in patient expectations, the scope and impact of treatment provided, clinical practice dynamics, and legal considerations. In countries like Australia, dental radiology diagnosis is mainly performed by general dentists. Admittedly, the findings of this study may not be generalizable to countries with a higher number of dento‐maxillofacial radiology (DMFR) specialists who provide radiological diagnosis and reporting. However, it is important to note that because general dentists perform routine dental imaging such as intraoral and panoramic radiographs and CBCT scans, they are in a crucial position to detect jaw pathologies and malignancies at an early stage (MacDonald and Yu ; Price et al. ). Early detection of these conditions is essential to prevent disease progression, extensive destruction of dental supporting structures, and potential patient morbidity (Choi ). Therefore, targeted training to improve the diagnostic skills of general dentists can help minimize missed lesions, particularly because early‐stage diseases are often asymptomatic and may be identified incidentally on dental radiographs. The survey also provided valuable insights into the interrelationships among the various consequences of interpretive errors. Participants considered that overtreatment negatively impacted their patients' quality of life and resulted in additional financial burdens on the clinician, legal consequences, and loss of the clinician's reputation. This highlighted the participants' awareness that negative patient outcomes were linked to financial and legal consequences for the clinicians. The participants' perceptions of measures to minimize interpretive errors were also analyzed in this survey. The findings emphasize the significance of utilizing high‐quality images (taken using the correct technique and exposure settings and having optimal contrast, density, and brightness) and choosing appropriate radiographs based on patient symptoms. The impact of image quality has been investigated in medicine and dentistry (Boita et al. ; Bruno, Walker, and Abujudeh ; Ghazali, Mohd Yusof, and Norman ). These practices are crucial for improving diagnostic accuracy and reducing interpretive errors in dentistry. However, it is essential to consider that these higher quality images may not always align with the “As Low As Reasonably Achievable” (ALARA) principle, which prioritizes minimizing patient exposure. Further studies are needed to develop imaging protocols for specific dental conditions that balance the diagnostic efficacy with minimum radiation dose. Studies have shown that there is a greater likelihood of interpretive errors with 3D imaging compared to 2D imaging (Williams and Drew ). However, this survey did not study the impact that 2D versus 3D imaging may have on the incidence and consequences of interpretive errors. Among the different measures to minimize interpretive errors, the participants also recognized training, education, and consulting with peers or colleagues as the most effective strategies. This emphasizes the importance of CPD and recognizes the value of collaboration and seeking input from others to enhance the accuracy of diagnosis and mitigate individual biases or oversights. Previous studies have shown that CPD can enhance clinical decision‐making skills and patient outcomes (Firmstone et al. ).
The survey results could inform the design of training programs and the implementation of quality assurance measures to minimize interpretive errors, improve patient care, and safeguard the professional reputation of dental clinicians. Interestingly, the use of ML algorithms as a cognitive aid was rated as the least effective measure for reducing the occurrence of interpretive errors, potentially due to scepticism about ML in dentistry. Studies have suggested that this scepticism may arise from concerns such as the interpretability and explainability of ML algorithms (Antoniadi et al. ). In addition, there are concerns that automated algorithms cannot fully replace or replicate the nuanced decision‐making capabilities of experienced dental professionals (Ding et al. ). There may also be concerns about patient privacy (Pethani ), liability, and responsibility for decisions made by the automated ML algorithms (Naik et al. ). It is important to note that this survey was conducted before the wider application of ML algorithms and the development of large language models using generative AI. Given the advancements in technology since then, it is reasonable to consider that the perceptions of ML algorithms may have evolved if the survey were conducted in the present time. The study's limitations include a small sample size, which may limit the generalizability of the survey results. However, similar low participant numbers were noted in other Australian surveys of dental practitioners (Bulmer et al. ; George et al. ; Jadav and Abbott ). In addition, the low GP to dental specialist ratio may have limited the interpretation of the results in this survey, especially when comparisons were made between these two groups. Although the survey responses provided valuable insights, not all questions were answered by every participant. Future research will aim to validate the suggestions provided by the participants and explore the correlation between participants' perceptions about interpretive errors and their age and clinical experience. This correlation was not examined in this study due to the low sample size, which resulted in fewer participants in each age and clinical experience category. Furthermore, as participation in this survey was voluntary, selection bias may have occurred. Conclusions The survey findings highlight the perceptions and beliefs of dental practitioners regarding the consequences of interpretive errors in dental radiology and potential solutions to minimize their occurrence. The results provide valuable insights to develop targeted interventions based on the clinician's level of training, gender, and sector of practice. Efforts to minimize interpretive errors should not only focus on improving patient safety but also address the concerns and priorities of dental practitioners, including the preservation of their professional reputation and business viability. However, there remains scepticism about the effectiveness of ML algorithms in dentistry. Future research is needed to explore the evolving perceptions of ML algorithms in dentistry and address these concerns effectively. Shwetha Hegde has made substantial contributions to conceptualization, design, project administration, investigation and formal analysis, visualization, writing the original draft, and revising the manuscript. Jinlong Gao was involved in methodology, feedback on the concept, data analysis and writing (review and editing), and visualization.
Stephen Cox participated in methodology, feedback on the concept, and writing (review and editing). Shanika Nanayakkara contributed to data curation and formal analysis, visualization, and writing (review and editing). Rajesh Vasa contributed to methodology, feedback on concepts, and writing (review and editing). The authors declare no conflicts of interest.
Primiparous women's knowledge and satisfaction based on their attendance at childbirth preparation classes
db930b09-c618-48de-8a3b-63a116ef5f6f
8363339
Patient Education as Topic[mh]
INTRODUCTION Many women prefer caesarean birth due to fear, anxiety and lack of knowledge about labour and the pain associated with natural birth (Masoumi et al., ; Aksoy et al., ). According to the World Health Organization (WHO), the ideal rate for caesarean birth is between 10% and 15% (Betran et al., ). However, the number of caesarean births worldwide is increasing every year (Aksoy et al., ; Betran et al., ; Storksen et al., ). In Iran, the prevalence of caesarean birth has increased, reaching an estimated rate of 48% in recent years (Azami‐Aghdash et al., ). Based on a systematic review, the factors influencing high caesarean birth rates in Iran were divided into three categories: (a) socio‐demographic factors, (b) obstetric‐medical causes and (c) non‐obstetric‐medical causes. Maternal education, grand multiparity, having a previous caesarean birth and fear of normal vaginal birth were major factors for having a caesarean section. The most important reasons for women preferring caesarean birth without medical indications were fear of childbirth and a perceived inability to tolerate labour pain (Azami‐Aghdash et al., ). Currently, international policies promote vaginal birth through prenatal education. Therefore, different approaches have been used to encourage women to give birth vaginally (Cosans, ). For instance, Dick Read advocated education and emotional support for women during childbirth to reduce their fear and break the fear‐tension‐pain cycle (Dick‐Read, ). Another approach was Lamaz's philosophy, which emphasized the importance of vaginal childbirth and educating women towards an active birth without unnecessary interventions (Lamaz, ). Some women who give birth for the first time are more stressed about adapting to their new role as a mother and their responsibilities in caring for the baby. These women are more likely to attend childbirth preparation classes (Jakubiec et al., ). Childbirth education can be helpful for pregnant women and their families to develop their birth plan (Bailey et al., ); make decisions about labour and birth; choose pain management methods; and prepare for breastfeeding and parenting (Gokce Isbir et al., ; Simpson et al., ; Stoll & Hall, ). Childbirth preparation classes also prepare women (Firouzbakht et al., ; Pinar et al., ) to identify unexpected complications that may contribute to maternal mortality, such as gestational hypertension, postpartum haemorrhage and infection (Malata et al., ). Evidence demonstrates that women's knowledge of childbirth reduces their anxiety (Hassanzadeh et al., ); increases their confidence; and enhances the desire for vaginal birth (Aksoy et al., ; Gokce Isbir et al., ). In a study of 132 participants, women who attended childbirth classes regularly had more knowledge of childbirth, were confident in their ability to give birth and were involved in the shared decision‐making process (Pinar et al., ). In another study, women with childbirth education managed labour pain better, used fewer medications during labour and had fewer instrumental births (Firouzbakht et al., ; Gluck et al., ). Attending childbirth classes was also associated with a positive relationship with the medical staff and more success in breastfeeding (Gao et al., ; Lin et al., ). Assessing patients’ satisfaction is one of the common components of healthcare services (Kamil Dhahi et al., ). Therefore, women's satisfaction with childbirth preparation classes is as important as their knowledge.
Women's low satisfaction is associated with delay in seeking medical advice and poor understanding of medical conditions (Wilkin et al., ). Although the impact of childbirth education on women's knowledge has been reported in different studies, women's satisfaction with childbirth preparation classes has not been systematically examined (Lee & Holroyd, ; Lin et al., ). In Iran, childbirth preparation classes are voluntary, free and run in eight 90‐min sessions. Trained midwives deliver the classes based on standard content that is set by the Ministry of Health (Table ) (IRI Ministry of Health & Medical Education, ). Although childbirth classes have been held since 2008, we have not found any study that examined the knowledge and satisfaction of women who attended the childbirth preparation classes in Iran. This study aimed to compare primiparous women's childbirth knowledge and satisfaction between regular and irregular attenders of childbirth preparation classes. METHODS This study was part of a mixed method study which was approved by the Ethics Committee of Tabriz University of Medical Sciences (IR.TBZMED.REC.1398.066). This paper reflects the cross‐sectional part of the study of 136 primiparous women with gestational ages of 35–37 weeks. The protocol has already been published (Hassanzadeh et al., ). This study was conducted during a 9‐month period between July 2019 and March 2020. Women were grouped based on their attendance of childbirth preparation classes: regular attenders (attending four to eight sessions) and irregular attenders (attending one to three sessions). According to IRI Ministry of Health and Medical Education directive policy, women who attend only 1 to 3 sessions of these classes are not considered trained. Therefore, they were classified as “Irregular attenders” in this study. The inclusion criteria were living in the city of Tabriz and being a primiparous woman with a gestational age of 35–37 weeks. The exclusion criteria were multiparity, multiple gestations, preterm births, non‐cephalic presentation, history of depression, having stressful family events and obstetric problems. The sample size was calculated based on the knowledge variable and using G‐Power software. According to the results of a study by Pinar et al. ( ), the average knowledge score in the control group was 8.85 (m 1 ); assuming a 30% increase in the knowledge score due to regular participation in class, the expected average score would be 11.50 (m 2 ). Therefore, the sample size for each group was calculated based on SD 1 and SD 2 of 4.58, with a two‐sided alpha of 0.05 and a power of 90%. Sixty‐four pregnant women per group were considered a sufficient sample size for this study. Considering the attrition rate, the final sample size was increased to 68 women in each group ( n = 136). 2.1 Sampling method Cluster sampling was used. Tabriz has 20 health complexes, each covering four to five health centres. One third of the health complexes ( n = 7) were selected via a simple random method from the 20 health complexes, and sampling was performed in 29 health centres. In each centre, eligible primiparous women at 35 to 37 weeks of gestation were invited to participate in the study. The objectives and requirements of the study were explained to all potential participants. If they were willing to participate in the study, written consent was obtained and the study questionnaires were completed through interviews.
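The sample‐size figures quoted above can be checked with a standard two‐sample t‐test power calculation. The sketch below is a minimal Python illustration using statsmodels rather than the G‐Power software used by the authors; the means, SD, alpha and power are taken directly from the passage above.

```python
import math
from statsmodels.stats.power import TTestIndPower

m1, m2, sd = 8.85, 11.50, 4.58   # control mean, expected mean after a 30% increase, common SD
effect_size = (m2 - m1) / sd      # Cohen's d, roughly 0.58

n_per_group = TTestIndPower().solve_power(
    effect_size=effect_size, alpha=0.05, power=0.90, alternative="two-sided"
)
print(math.ceil(n_per_group))     # about 64 per group, before the attrition allowance (68 recruited)
```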
2.2 Data collection tools The following questionnaires were used to collect data: socio‐demographic and obstetric characteristics, satisfaction with childbirth preparation classes and knowledge of pregnancy and childbirth. The socio‐demographic characteristics questionnaire included the following items: women's age, the onset of sexual intercourse, education, occupation, body mass index, spouse's age, education and occupation, spouse's support and marital satisfaction. Spouse's support and marital satisfaction were each assessed with a subjective item, and responses were given on a 5‐point Likert scale (very high, high, moderate, low and very low). Participants could choose one of the options. The obstetrics questionnaire included questions about abortion, infertility and history of unwanted pregnancies, and whether or not there was a preference for the sex of the foetus. The validity of this questionnaire in this study was confirmed by content and face validity. The satisfaction questionnaire for childbirth preparation classes was first developed by Lee et al. in 2009. This questionnaire included 25 questions in three subscales: (a) structure (date, time, place, length and size of classes), (b) process (the way classes were facilitated, including performance of the facilitator, usefulness of the topics and teaching styles) and (c) outcome (the influence of the classes on courage for normal birth, reducing anxiety for labour, and fulfilment of the participants' educational needs). Participants determined their level of satisfaction using a 5‐point Likert scale, from “not at all satisfied” (score 1) to “very satisfied” (score 5). In the study by Lee et al., the content validity of this questionnaire was 0.88, and the Cronbach alpha level was 0.89 for the whole questionnaire and 0.76, 0.88 and 0.72 for the following subscales: structural, process and outcome, respectively (Lee & Holroyd, ). This questionnaire has been evaluated in Iran (Hassanzadeh et al., ). Cronbach's alpha coefficients of the assessed constructs were between 0.83 (the class structure) and 0.92 (the class outcome), and a coefficient of 0.93 was obtained for the whole questionnaire. The Maternal Knowledge Survey Questionnaire on Pregnancy and Childbirth used in this study was designed by a researcher based on the content taught in childbirth preparation classes and includes 20 questions. The scores of this questionnaire ranged from zero to twenty, where each correct answer scored one point and each incorrect or “I don't know” answer scored zero points. The validity of this questionnaire was assessed based on expert opinions by calculating two indicators, the Content Validity Ratio (CVR) and the Content Validity Index (CVI). The calculated CVR and CVI values were in the range of 0.77 to 1 and 0.88 to 1, respectively. The reliability of the questionnaire was determined as 0.81 by Cronbach's alpha coefficient. 2.3 Statistical analyses After completing the questionnaires, the data were analysed using SPSS 21 software. The normality of quantitative data was assessed using the Kolmogorov–Smirnov test. Independent t tests, chi‐square, chi‐square for trend and Fisher's exact tests were used to assess the homogeneity of the study groups. The independent t test was used to compare knowledge and satisfaction scores between the study groups, and a general linear model (GLM) was used to adjust for the socio‐demographic characteristics of participants as potential confounding variables. P‐values of less than 0.05 were considered significant.
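As a minimal illustration of the comparison and adjustment strategy described here, the Python sketch below runs an independent t test and a covariate‐adjusted general linear model on synthetic data. pandas, statsmodels, scipy and the generated data are assumptions of this illustration (the study itself used SPSS 21); the group means and SDs reported in the Results are used only to make the synthetic data plausible.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from scipy import stats

rng = np.random.default_rng(0)
n = 68  # women per group, as in the study design
df = pd.DataFrame({
    "group": ["regular"] * n + ["irregular"] * n,
    "age": np.round(rng.normal(26, 5, 2 * n)),
    "knowledge": np.concatenate([rng.normal(13.7, 2.4, n),    # regular attenders
                                 rng.normal(11.0, 4.2, n)]),  # irregular attenders
})

# Unadjusted comparison between the groups (independent t test)
t, p = stats.ttest_ind(df.loc[df.group == "regular", "knowledge"],
                       df.loc[df.group == "irregular", "knowledge"])

# General linear model adjusting for a socio-demographic covariate
# (only age is included here; the study adjusted for several characteristics)
model = smf.ols("knowledge ~ C(group) + age", data=df).fit()

print(f"t = {t:.2f}, p = {p:.4f}")
print(model.params)
```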
RESULTS The response rate was 100%, and there were no missing data. There were no statistically significant differences in socio-demographic and obstetric characteristics between the groups ( p > .05). In the regularly attending group, the majority of women ( n = 49, 72.1%) attended eight sessions. In the irregularly attending group, 44 (64.7%) women participated in three sessions, and the remainder attended the childbirth preparation classes once or twice. The mean (standard deviation) age was 25.7 (4.7) years in the regularly attending group and 27.0 (5.6) years in the irregularly attending group ( p = .163). The other socio-demographic characteristics are shown in Table . The mean ( SD ) knowledge score was 13.7 (2.4) among regular attenders and 11.0 (4.2) among irregular attenders. According to the independent t test, the average knowledge score among the regular attenders was significantly higher than among the irregular attenders ( p < .001) (Table ). The majority of the women in the irregularly attending group did not provide a correct answer to the questions about pain relief or episiotomy (Table ). The mean ( SD ) overall class satisfaction score among women with regular attendance was 100.7 (13.6) compared to 98.5 (9.93) among women with irregular attendance ( p = .342). The mean scores of satisfaction with class structure ( p = .799) and class process ( p = .266) did not differ significantly between the groups. Only the mean score of satisfaction with the class outcome was significantly higher among women with regular attendance compared to women with irregular attendance ( p = .029) (Table ). Regular attenders had higher satisfaction scores for the following subscale items: "the usefulness of the topic about breathing exercises and relaxation technique," "the effectiveness of teaching method: didactic teaching," "the ability to fulfil women's educational needs" and "the ability to reduce anxiety for the labour" (Table ). After adjustment for the socio-demographic characteristics of participants, the general linear model results showed that the mean knowledge score ( p < .001) was significantly higher in the regularly attending group than in the irregularly attending group, but no significant difference was found between the groups in terms of total satisfaction and its subscales (Table ). DISCUSSION This is, to our knowledge, the first study in Iran to compare women's knowledge and satisfaction with childbirth preparation classes. The main finding of the study is that women who attended the childbirth preparation classes regularly had a higher mean knowledge score than women who did not attend classes regularly.
Although there was no significant difference in the mean overall satisfaction scores between the groups, women's satisfaction with the class outcomes, namely fulfilling their educational needs and reducing their anxiety about labour, was significantly higher among regular attenders compared with irregular attenders. The results of this study are in alignment with other studies (Lee & Holroyd, ; Pinar et al., ; Spinelli et al., ). For instance, the results of an observational study of 9,004 Italian women showed that childbirth preparation classes increased pregnant women's knowledge of breastfeeding and contraceptive methods and increased their satisfaction with the childbirth experience compared to women who did not attend any classes (Spinelli et al., ). Similarly, in a quasi-experimental study with 132 primiparous women in Turkey, women who attended childbirth preparation classes had a higher level of knowledge, responded better to their labour pains, and initiated breastfeeding earlier than the control group (Pinar et al., ). In a study by Lee and Holroyd, pregnant women who attended childbirth preparation classes stated that they learned more about labour, non-pharmacological strategies for pain relief, and breathing and relaxation exercises (Lee & Holroyd, ). The outcome measures in the above studies were compared between women who attended childbirth classes and women who did not; in our study, however, both groups attended the classes, one group regularly and the other irregularly. There are a limited number of studies that report pregnant women's satisfaction with childbirth preparation classes. In a mixed-methods study by Lee et al., women who attended childbirth classes were highly satisfied with their classes (Lee & Holroyd, ). In another study, 96.3% of women attending childbirth classes were satisfied with their classes and reported the classes to be very useful. The greatest satisfaction of these women was with the performance of the facilitator, the usefulness of relaxation methods and breathing techniques, and the overall effect of the classes (Ricchi et al., ), which is similar to the findings of our study. In our study, women with regular attendance reported that childbirth preparation classes reduced their anxiety about labour. Considering that fear and anxiety are among the predictors of traumatic birth (Ghanbari-Homayi et al., ), attending childbirth classes on a regular basis can improve women's childbirth experiences. The lowest level of satisfaction in our study was related to the lack of a tour of the maternity ward. In Lee's study, however, the lowest level of satisfaction was related to evening classes, where women felt tired after work and could not pay full attention to the class (Lee & Holroyd, ). The reason for this discrepancy could be the differences in the format and set-up of the classes. For example, the childbirth classes in Iran are run only in the mornings and without a ward tour, which could explain women's dissatisfaction with how the labour ward was taught. Although regular attendance at childbirth preparation classes is very important, women might also benefit from having a choice between eight short sessions or two long sessions, selecting the option that suits them best.
In the last decade, providing technology-based education has also become very popular (Meedya et al., ; Meedya et al., ) and can be considered an alternative to face-to-face education during the COVID-19 pandemic. However, many women in Iran who use public health services do not have access to a mobile phone or high-speed internet, which makes this option less practical for the women who need the education most. The quality of care in health services can improve and flourish when women feel empowered, knowledgeable and satisfied with the care they receive (Kamil Dhahi et al., ; Meedya et al., ). Childbirth preparation classes have been valued by women as a positive process in preparing them for childbirth. Increasing pregnant women's knowledge about labour, childbirth and strategies for coping with pain can increase their self-efficacy in enduring labour pain (Duncan et al., ; Pinar et al., ). To deliver the full content of the childbirth classes, there is a need for midwives or maternity nurses (Lee & Holroyd, ; Ricchi et al., ) who have the advanced knowledge, required skills and clinical experience to provide care during the antenatal, intrapartum and postpartum periods (Lee & Holroyd, ). 4.1 Limitations and strengths Although the random sampling is a strength of this study, there are some limitations. The main limitation is that the participants were primiparous women from healthcare complexes in Tabriz city only; therefore, the results cannot be generalized to multiparous women or to primiparous pregnant women from other cities. Another limitation is the cross-sectional design, in which the relationships shown do not necessarily indicate causal relationships. 4.2 Implications of findings Based on the findings of this study, nurses and midwives can play an important role in providing prenatal education for pregnant women and in encouraging them to attend these classes. As healthcare providers, they can deliver childbirth education alongside prenatal care. They can improve pregnant women's knowledge of pregnancy and childbirth by teaching them about the physiological changes of pregnancy, foetal development, labour and childbirth, warning signs, dealing with common problems during pregnancy, and non-pharmacological methods of pain relief in labour. Through this education, prenatal nurses can reduce pregnant women's fear and anxiety and increase their self-confidence in enduring labour pain. They can answer pregnant women's questions respectfully and guide them to seek professional help. Prenatal nurses can also prepare mothers to breastfeed and to care for their infants. Thus, midwives and nurses can play an important role in promoting vaginal childbirth and a healthy lifestyle for mothers during the pregnancy and postpartum periods.
CONCLUSION Regular attendance at childbirth preparation classes is associated with higher knowledge, which can help primiparous women meet their educational needs, reduce maternal anxiety about childbirth and give them courage for natural labour and birth. Therefore, childbirth preparation classes are recommended as part of routine prenatal care, and all pregnant women should be encouraged to attend them. None declared.
Biodegradation of phenol-contaminated soil and plant growth promotion by
c4d1e416-9bbd-4ee0-9b2e-b1bcde079aaf
11302282
Microbiology[mh]
Phenol is a crucial organic chemical commonly used in the production of resins, fungicides, preservatives, and pharmaceuticals . It is also essential for manufacturing synthetic fibers, rubbers, and dyes . Because phenol is toxic, the mass production of phenol and its derivatives inevitably harms natural ecosystems and human health. Phenol residues can affect soil organic matter, microorganisms, and enzyme activity, leading to reduced soil permeability, moisture and nutrient loss, and declining fertility. In parallel, phenol enters crops through root absorption and foliar deposition, disturbing plant growth and development and reducing crop yield and quality. Notably, phenol can be transmitted through the food chain to humans and livestock, damaging the nervous system, liver, kidneys, and other organs and even causing genetic mutations and cancer . The mobility of phenol in soil is limited because it sorbs to soil particles; thus, its removal in terrestrial environments can be challenging . Physical and chemical methods for remediating phenol-contaminated soil primarily include thermal treatment, leaching, electrokinetics, and chemical oxidation . These methods can effectively reduce the phenol concentration in soil but have obvious drawbacks, such as high cost, limited applicability, and significant impacts on soil nutrients and fertility . Microbial remediation strategies use indigenous or introduced microbial populations within the contaminated soil to reduce the content of soil organic pollutants . Specifically, aerobic phenol-degrading microorganisms utilize oxygen to break down organic contaminants into smaller and less toxic compounds . Some anaerobic bacteria and archaea use alternative electron acceptors, for instance nitrate, sulfate, or carbon dioxide, to degrade organic pollutants, yielding simpler end products such as methane or carbon dioxide . Phenol-degrading bacteria can be isolated from soil, water, and sediments; they can utilize phenol as a carbon source and break it down into less toxic products under a wide range of environmental conditions. To date, a series of phenol-degrading bacteria have been confirmed to be effective for bioremediation, such as Pseudomonas , Achromobacter , Rhodococcus , Acinetobacter , Bacillus , and Ralstonia eutropha . Recent studies indicate that plants possess the capacity to tolerate and eliminate environmental pollutants through phytoremediation . However, the effectiveness of phytoremediation for volatile organic pollutants (e.g., toluene and phenol) is compromised by the limited capacity of plants to degrade such pollutants and by the potential phytotoxicity or volatilization of these chemicals through the leaves . The symbiotic relationship between plants and microorganisms has been observed to accelerate the degradation of organic pollutants in soil . This is facilitated by the exchange of root exudates and decomposition products, which provide nutrients to microorganisms, while microbial activity in turn promotes the release of root exudates . Despite these insights, limited research has investigated soil antioxidant enzyme activity and plant responses during the degradation of soil-bound phenol under the combined influence of microorganisms and plants.
A key challenge in phytoremediation is that high concentrations of pollutants tend to hinder plant growth, often through oxidative stress, which in turn decreases the rate of in-situ phytoremediation . Such stress generates large amounts of reactive oxygen species (ROS) that interact with nutrient cycling, including superoxide (O 2 •−), hydroxyl radicals (OH∙), hydrogen peroxide (H 2 O 2 ), and singlet oxygen . In response, plants have evolved diverse protective and repair mechanisms to mitigate oxidative damage, employing both enzymatic and non-enzymatic antioxidant systems . The enzymatic antioxidants include peroxidase (POD), superoxide dismutase (SOD), ascorbate peroxidase (APX), catalase (CAT), and glutathione reductase, while well-known non-enzymatic antioxidants include glutathione, ascorbate, carotenoids, and tocopherols . It is therefore important to study how plants and microorganisms together affect soil antioxidant enzymes and the plant's antioxidant system when dealing with soil organic pollutants. This study aims to screen phenol-degrading bacteria with growth-promoting capabilities from a vegetable field that has been exposed to prolonged phenol contamination. Pot experiments were conducted to investigate the remediation of phenol-contaminated soil using a collaborative approach involving phenol-degrading bacteria and Chinese cabbage. Furthermore, we assessed the influence of phenol-degrading bacteria on soil antioxidant enzymes and the plant's antioxidant system. Soil sample The soil used in this study was collected from Guanduqiao village, Zhongmou City, Henan Province (34.7480°N, 114.0452°E). The area has a mid-latitude warm temperate continental monsoon climate, with an annual average of 2,366 h of sunshine, an annual average temperature of 14.2°C, and an annual average precipitation of 616 mm. The average agricultural period in the region is 309 days/year, and vegetables and fruits are mainly grown in greenhouses. The soil samples were collected at a depth of 0–20 cm from a greenhouse that had been subjected to pesticide pollution (carbofuran, triadimefon, and difenoconazole) for over 10 years. The soil type was fluvo-aquic with a light texture, and its basic physical and chemical properties were: pH 7.21, organic matter 10.27 g/kg, total nitrogen 1.07 g/kg, alkali-hydrolyzable nitrogen 74.20 mg/kg, available phosphorus 30.68 mg/kg, available potassium 116.75 mg/kg, and phenol 56.26 mg/kg. Isolation of phenol-degrading bacteria from phenol-contaminated soil Ten grams of contaminated soil were blended with sterile water and agitated at 30°C and 180 rpm for 30 minutes. The mixture was allowed to settle, and the supernatant was drawn off. The supernatant was then appropriately diluted before being spread onto solid Luria-Bertani (LB) medium. The culture plates were placed in an incubator at 30°C for 24 h, following the methodology established by A. G. Wollum . Colonies displaying diverse morphologies and colors were systematically assigned identification numbers. All selected colonies were maintained at 4°C for the subsequent evaluation of their phenol removal efficacy. The isolated and purified strains were inoculated into LB liquid medium, cultured on a shaker at 30°C and 180 r/min for 2 days, and then centrifuged at 4,000 r/min for 15 min.
The supernatant was discarded, and the bacterial pellet was collected and washed with sterilized water 2–3 times. The cell concentration was adjusted to OD 600 = 1 and stored as a seed solution at 4°C. One millilitre of bacterial suspension was inoculated into 50 mL of basic medium with phenol as the sole carbon source. The basic medium contained FeSO 4 ·7H 2 O (0.018 g), K 2 HPO 4 ·3H 2 O (1.31 g), MgSO 4 ·7H 2 O (0.50 g), NaNO 3 (3.0 g), and KCl (0.50 g) in 1,000 mL double-distilled water . The phenol concentration was increased gradually from 10 mg/L to 50 mg/L and 100 mg/L; this range was chosen based on a prior study and the phenol pollution level of the sampled soil . The concentration of the bacterial suspension was adjusted to OD 600 = 1, and the strain was then inoculated into the basic medium with 100 mg/L phenol as the sole carbon source. Medium without bacterial inoculation was set up as the control (three replications). All media were incubated at 28°C and 180 r/min for 24 h. After shaking, the phenol content in each culture bottle was determined by the modified 4-aminoantipyrine method . The medium was centrifuged at 12,000 r/min for 1 min, and 30 µL of supernatant was added to a 10 mL test tube. Sequentially, we added 4 mL of distilled water, 40 µL of ammonia buffer solution (pH 10), 80 µL of 2% 4-aminoantipyrine, and 80 µL of 80% potassium ferricyanide solution to the test tube and mixed well. The tube was left to stand for 15 min, and its absorbance was measured at 510 nm. The absorbance data were converted to phenol concentrations using a calibration curve covering 0 to 100 mg/L. The strain with the highest removal percentage was selected as the phenol-degrading bacterium for the present work. Removal percentage = (phenol content of control culture solution − phenol content of bacterial culture solution)/phenol content of control culture solution × 100% (a worked example of this calculation is sketched after the Statistics subsection below). The phosphorus-solubilizing and potassium-dissolving ability Fifty millilitres of PKO (Pikovskaya's inorganic phosphorus) medium were added to a 250 mL conical flask and inoculated with 1 mL of phenol-degrading bacteria at an OD 600 of 1 (three replications) . The flask was incubated for 72 h at 30°C with a rotational speed of 180 r/min. We then collected the culture broth and centrifuged it at 4°C for 10 min at 10,000 r/min. The concentration of phosphorus was determined by the molybdenum antimony colorimetric assay method . Fifty millilitres of potassium-solubilizing bacteria culture medium were likewise inoculated with 1 mL of phenol-degrading bacteria at an OD 600 of 1 in a 250 mL triangular flask (three replications) . The flask was incubated for 72 h at 30°C with a rotational speed of 180 r/min. Morphological, physiological, biochemical, and molecular characterization The selected phenol-degrading bacteria were inoculated on LB agar plates and incubated at 30°C for 24 h. The morphology of the colonies, including size, shape, color, shine, texture, and transparency, was observed using a microscope (SK200, Motic). The Gram-staining method was employed for strain identification . The selected phenol-degrading bacteria were inoculated into LB liquid medium to prepare a bacterial suspension. The bacterial cells were washed with phosphate buffer and then fixed with 2.5% glutaraldehyde.
After another wash with phosphate buffer, gradient dehydration was performed using ethanol-water solutions . The morphology and size of the bacterial samples were observed by scanning electron microscopy (SEM; S-3400N, Hitachi) at the Central Laboratory of Henan Agricultural University. An aerobic test, contact enzyme (catalase) determination, starch hydrolysis test, methyl red test, Voges-Proskauer (VP) test, gelatin hydrolysis test, and citrate utilization test were conducted according to the identification manual of common bacteria . The selected phenol-degrading bacteria were characterized through 16S rRNA sequence alignment. The 16S rRNA sequence was amplified by PCR with the universal primers 27F (5′- AGAGTTTGATCCTGGCTCAG -3′) and 1492R (5′- GGTTACCTTGTTACGACTT -3′). The amplified samples were sequenced by Sangon Biotech Co., Ltd. (Shanghai, China), according to the method outlined by Monis et al. . The acquired sequences underwent BLAST analysis against the NCBI (National Center for Biotechnology Information) database, and a 16S rDNA phylogenetic tree was constructed using the neighbor-joining method with a bootstrap value of 1,000 in MEGA 7.0 software, as described by Kumar et al. . All sequences were deposited in the GenBank repository, and the corresponding accession numbers were obtained. Growth and phenol degradation of phenol-degrading bacteria under different culture conditions The selected phenol-degrading bacterium was tested in media with different pH values (3, 4, 5, 6, 7, 8, and 9), incubation times (12, 24, 48, 72, and 120 h), initial phenol concentrations (100, 250, 500, 1,000, 1,500, and 2,000 mg/L), inoculation amounts (1%, 5%, 10%, 15%, and 20%), liquid volumes (25, 50, 75, 100, 150, and 250 mL), and nitrogen sources (peptone, yeast extract, sodium nitrate, urea, and ammonium sulfate) to evaluate the effects of these conditions on bacterial growth (OD 600 ) and phenol degradation. Effect of phenol-degrading bacteria on plant growth in greenhouse The Chinese cabbage ( Brassica chinensis L.) variety "April slow" was used in this experiment. The soil was collected from the same field from which the phenol-degrading bacteria were isolated. Impurities such as roots, stones, and soil animals were removed, and the soil was sifted through a 10-mesh sieve (1.70 mm) and mixed well; each pot contained 1 kg of soil. The culture broth of the selected bacteria was centrifuged and the supernatant removed; the pellet was washed by three cycles of centrifugation and resuspension in sterile water, and the concentration was adjusted to 10 11 CFU (colony-forming units)/mL. The pot experiment was set up with six treatments: (i) phenol-contaminated soil (P); (ii) phenol-contaminated soil + inactivated bacteria (PI); (iii) phenol-contaminated soil + active bacteria (PB); (iv) phenol-contaminated soil + vegetable (PV); (v) phenol-contaminated soil + inactivated bacteria + vegetable (PIV); (vi) phenol-contaminated soil + active bacteria + vegetable (PBV). Chinese cabbage seeds were sterilized with H 2 O 2 and germinated on sterilized filter paper in petri dishes for 3 days. Six uniform seedlings were then transplanted into each pot and transferred to a greenhouse, which was set to a light/dark period of 16/8 h, a relative humidity of 60% ± 5%, and a temperature of 25°C. After 1 week of emergence, three plants with uniform growth were retained in each pot.
The inoculum of phenol-degrading bacteria was 10 8 CFU/g soil. For the inactivated-strain treatments, the inoculum consisted of strains inactivated by high temperature and high pressure. Each treatment was set up with four replicates. The pot experiment was carried out in the greenhouse, and the pots were watered every day to keep the soil water content at 60%–70% of field capacity. Chinese cabbage was sampled 35 days after strain inoculation. Measurement of soil phenol, nutrients, microbial biomass, and enzyme activity Soil samples collected from each treatment were sifted through a 10-mesh sieve and mixed well. Soil phenol content was determined by gas chromatography-mass spectrometry . The content of available phosphorus and potassium in the soil samples was measured after air-drying . Microbial biomass carbon (MBC) and microbial biomass nitrogen (MBN) were determined by the chloroform fumigation-direct extraction method . Soil basal respiration (SBR) was measured with a gas chromatography system (GC-2014, Shimadzu, Kyoto, Japan) based on the linear increase in CO 2 with time . The microbial metabolic quotient (qCO 2 = SBR/MBC) was calculated with the formula of T. Anderson and K. Domsch . Soil dehydrogenase activity (S-DHA) was determined by incubating 1 g of soil with triphenyl tetrazolium chloride for 6 h at 30°C . Soil catalase activity (S-CAT) was determined using the UV spectrophotometric method . Soil polyphenol oxidase activity (S-PPO) was determined using the pyrogallol colorimetric method . Soil urease activity (S-UE) was determined using the Nesslerization colorimetric method . Measurement of photosynthetic system, biomass, and root system architecture in Chinese cabbage The photosynthetic rate (Pn), stomatal conductance (Gs), intercellular CO 2 concentration (Ci), and transpiration rate (Tr) of the first fully expanded leaf at the top of the crown were measured with a portable photosynthesis system (Li-6400, LICOR Inc., USA) before harvest. During the measurements, the relative humidity and air temperature in the greenhouse were kept at 60% ± 5% and 25°C, leaf temperature was maintained at 25°C, and the photosynthetic photon flux density in the leaf cuvette was set at 1,000 µmol/m 2 /s. Three leaves were randomly measured from each treatment. The Chinese cabbage plants were washed with tap water to remove soil from the roots and then stored in 70% alcohol. Plant height, leaf length, leaf width, and plant spread (the maximum width of the plant on a horizontal projection plane) were measured with a ruler (resolution: 1 mm). Fresh and dry weights (all samples dried at 105°C for 30 min and then at 70°C to constant weight) were determined with a balance (resolution: 0.01 g). Root images were taken with a scanner (LA1600+ scanner, Canada). The root system was divided into five categories based on root diameter (RD): I (RD 0–0.5 mm), II (RD 0.5–1.0 mm), III (RD 1.0–1.5 mm), IV (RD 1.5–2.0 mm), and V (RD > 2.0 mm). The root-related parameters, including root length, root surface area, root volume, RD, and root tips, were analyzed using WinRhizo software (WinRhizo2003b, Canada). The first fully expanded leaves at the top of the crown, including the leaf used for gas exchange measurement, were cut into small pieces, and 0.3 g samples were soaked in 25 mL of 95% ethanol. The samples were kept in the dark at room temperature.
After the color of the leaves had faded, the content of photosynthetic pigments, including chlorophyll a (Chl a), chlorophyll b (Chl b), and carotenoids, was measured at 470 nm, 649 nm, and 665 nm with a UV-Visible spectrophotometer. The total chlorophyll (Total Chl) content is the sum of the Chl a and Chl b contents . Measurement of quality, resistance substances, and enzyme activity in Chinese cabbage The collected fresh Chinese cabbage samples were separated into aboveground and underground parts. Reducing sugars were quantified using the 2,4-dinitrosalicylic acid colorimetric method . Vitamin C (VC) content was quantified using the indophenol method . Crude fiber content was determined using the gravimetric method . Nitrate content was determined following Singh's method . Malondialdehyde (MDA) content was measured by the thiobarbituric acid reaction method . The concentration of H 2 O 2 in the leaves was determined using Alexieva's method . SOD was determined by the nitroblue tetrazolium method based on Tandy's studies . CAT was assayed according to H. Aebi . POD was determined according to Zhang et al. . APX was measured based on the method of Y. Nakano and K. Asada . Statistics Statistical analyses were carried out using SPSS 16.0 (SPSS Inc., Chicago, IL, USA). One-way ANOVA (analysis of variance) with a least significant difference test ( P < 0.05) was used to determine the significance of differences . A two-way ANOVA was employed to assess the effects of bacteria, plants, and their interaction. Pearson's correlation analysis was performed to investigate the relationships between different indicators. Before performing PCA (principal component analysis) in MetaboAnalyst 5.0, all data were log-transformed. ClustVis was utilized to create PCA plots and heatmaps . All graphs were generated using Origin 2018 (OriginLab Corporation, Northampton, MA, USA). We explored the relationships between soil microbial activity, soil enzyme activities, the plant root system, the plant antioxidant system, and plant biomass and quality by using partial least squares path modeling (PLS-PM), a particularly useful statistical method for demonstrating cause-and-effect relationships among observed and latent variables . The estimates of path coefficients and the coefficients of determination ( R 2 ) in our path model were validated in R (v. 3.3.3) with the package "plspm" (1,000 bootstraps).
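As a concrete illustration of the phenol quantification and removal-percentage calculation described in the isolation subsection, the sketch below fits a linear calibration curve over the 0 to 100 mg/L range and then applies the stated formula. It is written in Python with NumPy; the absorbance readings, and therefore the fitted slope, are invented placeholders rather than values from this study.

```python
import numpy as np

# Illustrative calibration standards over the 0-100 mg/L range and their A510
# readings; the absorbances (and hence the fitted line) are placeholders.
standard_conc = np.array([0, 20, 40, 60, 80, 100], dtype=float)      # mg/L phenol
standard_a510 = np.array([0.000, 0.112, 0.221, 0.338, 0.450, 0.561])

slope, intercept = np.polyfit(standard_a510, standard_conc, 1)

def phenol_mg_per_l(a510):
    """Convert an A510 reading to a phenol concentration via the calibration line."""
    return slope * a510 + intercept

# Removal percentage as defined in the Methods:
# (control phenol - inoculated-culture phenol) / control phenol * 100
control_phenol = phenol_mg_per_l(0.552)   # reading from the uninoculated control
culture_phenol = phenol_mg_per_l(0.013)   # reading from the inoculated culture
removal_pct = (control_phenol - culture_phenol) / control_phenol * 100
print(f"removal percentage: {removal_pct:.2f}%")
```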
RESULTS Screening of phenol-degrading bacteria A total of 16 strains potentially capable of degrading phenol were screened and isolated. After 48 h of cultivation in an inorganic salt medium with phenol as the sole carbon source, strain H13 exhibited the highest removal capability toward 100 mg/L phenol, achieving a degradation efficiency of 97.67%. Simultaneously, the growth of strain H13 was measured at 0.94 (OD 600 ; ). After 72 h, strain H13 was able to dissolve tricalcium phosphate and potassium feldspar, releasing phosphorus and potassium at concentrations of 325.62 mg/L and 25.99 mg/L, respectively.
The surface of strain H13 was smooth with a neat, opaque, and slightly yellow edge . Gram staining showed that strain H13 was a Gram-negative, irregular rod . SEM observation showed that H13 was rod-shaped, with a size of 0.56–0.57 µm × 1.3–2.94 µm . H13 was negative for the gelatin hydrolysis, V-P, methyl red, citrate utilization, and hydrogen sulfide tests but positive in the contact enzyme (catalase) and starch hydrolysis tests . The partial sequence of the 16S rRNA gene was a continuous stretch of 1,329 bp (OR024678). A phylogenetic tree constructed with the neighbor-joining method indicated that the strain most homologous to H13 was Myroides xuanwuensis TH-19 . Therefore, strain H13 was identified as M. xuanwuensis . The growth and phenol degradation of strain H13 under different conditions The highest phenol removal percentage, up to 98.67%, was shown by strain H13 at pH 7. Phenol degradation and the OD 600 value gradually increased as the pH approached 7 but decreased when the pH exceeded 7 . Within 24 h, strain H13 exhibited rapid growth, with the OD 600 reaching its maximum, and the removal percentage reached 97.00% at 48 h; afterward, the removal percentage remained relatively stable. The culture medium's nutrients were consumed over time, reducing bacterial growth . There was no significant difference (at P < 0.05) in the phenol removal percentage between the initial concentrations of 100 mg/L and 250 mg/L. As the initial concentration of phenol increased further, the removal percentage gradually decreased, dropping to 9.00% when phenol reached 2,000 mg/L. After the initial concentration of phenol exceeded 250 mg/L, the growth of the strain also started to decline. Therefore, the optimal degradation concentration range for strain H13 was between 100 mg/L and 250 mg/L . At an inoculation amount of 1%, strain H13 showed the highest phenol removal percentage, reaching up to 96.67%. As the inoculation amount increased, the removal percentage decreased, although it remained above 70%; bacterial growth, as indicated by the OD 600 value, also decreased significantly ( P < 0.05) with increasing inoculation amount . Compared to the other treatments, a 25 mL liquid volume in the triangular flask gave significantly higher phenol degradation ( P < 0.05). As the liquid volume increased and the oxygen available in the flask decreased, the phenol degradation ability of strain H13 showed a downward trend, and the OD 600 value also decreased significantly with increasing liquid volume ( P < 0.05; ). Different nitrogen sources had varying effects on the phenol degradation ability of strain H13. When yeast extract was used as the nitrogen source, strain H13 had the highest phenol removal percentage and the best growth, whereas with ammonium sulfate as the nitrogen source, the phenol degradation ability of strain H13 was the lowest. It is worth noting, however, that the removal percentage with all nitrogen sources was still above 60% . The effect of strain H13 on phenol and nutrient content in potted soil The interaction between the inoculated strain and the cultivation of Chinese cabbage had a significant effect on the soil's phenol content (significant bacteria × plant interaction). The soil phenol content in the PB and PBV treatments decreased significantly, by 89.22% and 92.63% ( P < 0.05), respectively.
However, the PV and PIV treatments decreased it by only 40.05% and 42.12%, respectively . Compared with the P treatment, soil-available phosphorus and potassium increased significantly by 79.47% and 18.40% in the PB treatment ( P < 0.05) and by 92.81% and 24.90% in the PBV treatment ( P < 0.05; ), whereas the PI, PV, and PIV treatments showed no significant increases. Therefore, inoculating soil with the active strain H13, together with the combined action of H13 and native microorganisms, can notably decrease phenol levels and significantly boost nutrient levels in the soil. The effect of strain H13 on soil microbial biomass and soil enzyme activity The PB and PBV treatments with active inoculants had higher MBC and MBN levels than the others . A similar trend was observed for SBR and qCO 2 . There were significant variations in the activities of S-DHA, S-CAT, S-PPO, and S-UE among the six treatments ( P < 0.05). The PB treatment displayed significantly higher activity than the P, PI, PV, and PIV treatments, while the PBV treatment exhibited significantly higher activity than all the other treatments ( P < 0.05; ). The effect of strain H13 on the growth of Chinese cabbage No significant differences in the growth of Chinese cabbage were observed between the PIV treatment, which received inactivated bacteria, and the PV treatment. In contrast, significant differences ( P < 0.05) were found between the growth parameters of the PBV and PV treatments. Plant height, leaf length, leaf width, fresh weight, dry weight, and leaf area in the PBV treatment increased by 54.82%, 46.08%, 43.43%, 155.21%, 158.33%, and 59.45%, respectively . Root length, root surface area, root average diameter, root tips, and root forks in the PBV treatment increased by 60.61%, 73.26%, 88.20%, 33.62%, and 72.71% as well . These findings indicate that the PBV treatment primarily influenced root morphology within the 0–1.5 mm diameter range: root length (RD 0–1.5 mm), root surface area (RD 0–1.5 mm), and root volume (RD 0–1.5 mm) increased by 60.01%, 64.33%, and 63.11%, respectively . The effect of strain H13 on photosynthesis, quality, resistance substances, and resistance enzyme activity of Chinese cabbage Inoculation with the active bacteria significantly increased the content of Chl a, Chl b, carotenoids, and total Chl in Chinese cabbage ( P < 0.05; ). Furthermore, photosynthesis-related parameters such as Pn, Tr, and Gs were significantly increased, while Ci showed a significant decrease ( P < 0.05; ). Compared to the PV treatment, the PBV treatment showed significant increases in reducing sugar and Vc content of 17.95% and 46.93% ( P < 0.05; ), respectively, and significant decreases in crude fiber and nitrate content of 27.00% and 23.52%, respectively ( P < 0.05, ). In addition, inoculation with live bacteria reduced the levels of MDA and H 2 O 2 in both the aboveground and underground parts of Chinese cabbage ( P < 0.05; ). The activities of antioxidant enzymes, including SOD, POD, CAT, and APX, increased significantly in both the aboveground and underground parts, by 33.07% to 95.32% ( P < 0.05, ). Principal component analysis, heatmap analysis, correlation matrix, and PLS-PM analysis PCA was applied to depict the impact of inoculating the bacterial strain on potted Chinese cabbage.
The soil and plant indexes of the potted Chinese cabbage were well depicted by PC1 and PC2, which explained 85.9% and 7.4% of the total variation, respectively. The PBV treatment was significantly separated from the PV and PIV treatments (P < 0.05). Phenol concentration, Ci, and leaf H2O2 had the most prominent positive influence on PC1, while S-UE, S-DHA, and leaf fresh weight had the greatest negative impact on PC1. Root tips, leaf fresh weight, and leaf dry weight had the most substantial positive effect on PC2, whereas root SA IV, root V IV, and root L V showed the largest negative impact on PC2. The soil phenol content showed a significant negative correlation with the activities of soil dehydrogenase, peroxidase, polyphenol oxidase, and urease (P < 0.05). Plant height, leaf length, leaf width, leaf fresh weight, leaf dry weight, root length, root surface area, and root volume were negatively correlated with phenol concentration, leaf MDA, root MDA, leaf H2O2, root H2O2, crude fiber, nitrate content, and Ci, and positively correlated with MBC, MBN, SBR, qCO2, S-DHA, S-CAT, S-PPO, S-UE, available phosphorus, available potassium, reducing sugar, Vc, Chl a, Chl b, carotenoids, total Chl, Pn, Tr, Gs, leaf SOD, leaf POD, leaf APX, and leaf CAT. In this work, a hypothetical model was established to quantitatively assess the relative contributions of the various pathways associated with these factors. In the PLS-PM diagram, the value on an arrow connecting a rounded rectangle and a circle corresponds to the loading of the corresponding measured variable, while the path coefficient (β) on an arrow linking two circles represents the overall effect of one latent variable on another; positive values signify a positive impact on the outcome, negative values the opposite, and the magnitude of the value reflects the strength of the effect. Notably, soil microbial biomass had a substantial influence on soil enzyme activity (β = 0.99), and soil enzyme activity in turn had a strong negative effect on soil phenol content (β = −0.99); the introduction of H13 inoculation led to a significant reduction (P < 0.05) in soil phenol content. In addition, soil microbial biomass strongly influenced soil nutrient content (β = 0.98), and nutrient levels exerted a greater effect on the root system (β = 0.66) than soil phenol did via the belowground antioxidant system (β = −0.34). Our model underscores that the growth and quality of Chinese cabbage are predominantly shaped by the interplay of the photosynthetic apparatus, the aboveground antioxidant system, and the root architecture. Within this framework, the root system emerged as the primary influencing factor, exerting a substantial impact on both the growth (β = 0.92) and the quality (β = 1.84) of Chinese cabbage. While the influence of the aboveground antioxidant system on growth did not reach statistical significance (P > 0.05), it had a notable negative influence on the quality of Chinese cabbage (β = −1.09).
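As a small worked illustration of how such path coefficients combine (standard path-model arithmetic; only the β values quoted above come from the study), the indirect effect of one latent variable on another along a path is the product of the coefficients on that path, so the implied indirect effect of soil microbial biomass on soil phenol content via soil enzyme activity is

\[ \beta_{\text{biomass}\rightarrow\text{enzymes}} \times \beta_{\text{enzymes}\rightarrow\text{phenol}} = 0.99 \times (-0.99) \approx -0.98, \]

consistent with the strong reduction in soil phenol attributed to the active inoculant.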
In this study, the phenol-degrading bacterium M. xuanwuensis H13 was screened from a vegetable field that had been chronically contaminated with phenol-containing pesticides. The extensive use of such pesticides is worsening soil contamination with the intermediate compound phenol. M. xuanwuensis was first isolated from soil samples in Jiangsu, China, and has been reported to weather biotite (a silicate mineral) and release Si, Al, and Fe from the mineral. The ability of strain H13 to degrade phenolic pollutants was measured and reported in the current study.
We observed a significant decrease in soil phenol content after H13 inoculation; moreover, H13 significantly increased phosphorus solubilization and potassium release, which raises the availability of these nutrients in the soil. Notably, H13 promoted the growth of both the aboveground and underground parts of Chinese cabbage through its effects on root development and photosynthetic metabolism. As the volume of medium increased and the available oxygen decreased, both the phenol degradation capacity and the growth of strain H13 declined gradually, consistent with its aerobic nature. The degradation of phenol by most phenol-degrading bacteria occurs under neutral conditions, and few bacteria can grow and maintain phenol-degrading capability under extremely acidic conditions. H13 showed high degradation efficiency for 100 mg/L phenol within an external pH range of 5–9 after 24 h, with the removal percentage reaching up to 80%, and it maintained growth and phenol-degrading ability even at pH 3 (removal percentage of 44%). By comparison, Rhodotorula sp. strain ZM1, which was screened from acidic mine drainage, degraded 1,100 mg/L of phenol in 120 h at pH 3, with a specific degradation rate of 0.074 g phenol per g cell dry weight per hour. These findings indicate that H13 is capable of surviving and remediating phenol contamination under extremely acidic conditions. With each of five different nitrogen sources and phenol as the sole carbon source, H13 achieved a phenol removal percentage of over 64%, which suggests that H13 can degrade phenol under a variety of environmental conditions. The convergence of phytoremediation and microbial bioremediation strategies can build a more efficient approach to the remediation of contaminants, particularly organic compounds. Different effects were observed in the soil following the application of the phenol-degrading bacteria (PB), Chinese cabbage (PV), and the co-inoculation of Chinese cabbage with the degrading bacteria (PBV). The soil phenol content decreased by 40% under Chinese cabbage cultivation (PV) compared with the control treatment (P), so cultivating Chinese cabbage alone can contribute, to a certain extent, to reducing the phenol content of the soil; however, the phytotoxicity of pollutants may inhibit plant growth and reduce the efficiency of bioremediation. Applying the phenol-degrading bacteria alone (PB) or together with Chinese cabbage (PBV) reduced soil phenol content by 89% and 93%, respectively. This result confirms that H13 played a predominant role throughout the process of phenol degradation, which might be due to the even distribution of the degrading bacteria afforded by the small soil volume in the potting environment. S-DHA, S-CAT, S-PPO, and S-UE are important indicators of soil microbial activity, soil redox status, the degradation of soil organic pollutants, and the stability and health of soil ecosystems. We found that the levels of S-DHA, S-CAT, S-PPO, and S-UE increased significantly after inoculation with H13 (P < 0.05), and that enzyme levels were significantly negatively correlated with soil phenol content (P < 0.05). Xun et al. found that the combined action of rhizosphere growth-promoting bacteria, arbuscular mycorrhizal fungi, and oats effectively remediated soil petroleum pollution, and this approach led to a significant increase in the activities of urease, dehydrogenase, and sucrase in the soil.
In the remediation study conducted by Raimondo et al., the use of sugarcane filter cake to enhance Streptomyces strains for the remediation of lindane-contaminated soil resulted in a significant increase in soil dehydrogenase, catalase, and urease activities. These findings indicate that inoculating remediation microorganisms can enhance the activity of relevant enzymes in the soil, thereby promoting the degradation of organic pollutants. Furthermore, in the presence of both Chinese cabbage and the bacterial strain, soil enzyme levels increased further, which could be closely related to the growth of plant roots in the soil and their interaction with the plant growth-promoting bacteria. The enhanced biological activity of the root system and the strain stimulates the secretion of hormones, enzymes, and other compounds. Our findings confirm that H13 works both independently and in concert with plant root systems in phenol-contaminated soil; this interaction enhances soil enzyme activity and concurrently reduces soil phenol content. However, it is worth noting that phenol degradation by a single plant species remains inefficient, and the potential toxicity of the pollutant to plants cannot be overlooked. A high capacity for phosphorus and potassium solubilization can effectively increase the availability of nutrients in the soil, thereby promoting plant growth and enhancing plant resilience to stress. Our results showed that soils inoculated with H13 contained more available phosphorus and potassium (higher by 77.74% and 20.19%, respectively). The increase in nutrient content in the soil promotes nutrient absorption by plant roots, thus enhancing plant growth. We observed significant changes in the root system architecture of Chinese cabbage upon inoculation with H13, including an increase in the abundance of roots with a diameter of 0–1.5 mm as well as an increase in the number of root tips and forks. These findings indicate that, when inoculated with H13, Chinese cabbage developed a more extensive root system. The progression toward larger and more intricate root systems equips plants with an expanded surface area, fostering enhanced interactions with the soil. Under H13 inoculation, the activities of SOD and POD in the aboveground tissues of Chinese cabbage increased significantly, as did those of CAT and APX (P < 0.05); this increase in enzyme activity contributes to the removal of ROS within plant tissues. Promisingly, the levels of MDA and H2O2 in both the aboveground and underground parts of the plant were significantly reduced (P < 0.05). Phenolic pollutants generate ROS that cause serious oxidative damage to the lipids and proteins of living cells and tissues, and MDA and H2O2 are commonly used indicators of the degree of oxidative stress. Antioxidant enzymes play a pivotal role in converting excess ROS and free radicals within the plant into less toxic or harmless substances, helping to keep ROS levels in balance: SOD dismutates O2•− to H2O2 and O2, while POD, CAT, and APX further catalyze the conversion of H2O2 to H2O. APX is also a key enzyme in the ascorbate–glutathione (AsA-GSH) cycle, contributing to intracellular redox balance by converting H2O2 to H2O.
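For reference, the textbook reactions behind this description (standard antioxidant-enzyme chemistry, not equations reported by the authors) are:

\[ \mathrm{SOD:}\quad 2\,\mathrm{O_2^{\bullet-}} + 2\,\mathrm{H^+} \rightarrow \mathrm{H_2O_2} + \mathrm{O_2} \]
\[ \mathrm{CAT:}\quad 2\,\mathrm{H_2O_2} \rightarrow 2\,\mathrm{H_2O} + \mathrm{O_2} \]
\[ \mathrm{APX:}\quad \mathrm{H_2O_2} + 2\,\text{ascorbate} \rightarrow 2\,\mathrm{H_2O} + 2\,\text{monodehydroascorbate} \]

POD similarly reduces H2O2 to H2O while oxidizing an organic electron donor.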
H13 enhanced the activities of antioxidant enzymes in Chinese cabbage, facilitating the elimination of MDA and H2O2 within the plant and thereby limiting the harm inflicted by oxidative stress. Furthermore, the enhanced antioxidant capacity and the stabilization of the photosynthetic system brought about by H13 had a direct impact on sugar accumulation: by minimizing oxidative stress and preserving the integrity of the photosynthetic apparatus, H13 allowed the plants to convert light energy and carbon dioxide into carbohydrates more efficiently during photosynthesis. This increased photosynthetic productivity led to a greater buildup of sugars, such as sucrose, glucose, and fructose, within the Chinese cabbage tissues, which has implications for the quality and nutritional value of the produce. Similarly, Bacillus marsiflavi Bac 144 has been confirmed to enhance the activity of antioxidant enzymes in corn grown in a petroleum hydrocarbon-contaminated environment, ultimately eliminating ROS in the plants. The PLS-PM analysis indicated that soil phenol, acting through the belowground resistance (antioxidant) system (β = −0.34), had a comparatively negative impact on the root system of Chinese cabbage, which was outweighed by the positive effect of nutrient availability (β = 0.66). The elevated potassium content bolstered the plant's resistance under unfavorable conditions, while the increased phosphorus content supported membrane lipids and helped repair damaged membrane systems; these factors collectively fostered the expansion of the root system. Our results also revealed a marked increase in chlorophyll content and enhanced photosynthesis in Chinese cabbage under H13 inoculation, which could be attributed to the reduction in ROS levels within the plant and the consequent mitigation of damage to chloroplasts. Strengthened photosynthesis, the detoxification of ROS by antioxidant enzymes, and root growth synergistically contributed to the increase in aboveground biomass and the improvement in Chinese cabbage quality, with the growth of the root system exerting the most profound influence on both; the antioxidant system of the aboveground parts also significantly affected overall quality.
Conclusions
The present study isolated Myroides xuanwuensis H13, a strain with phenol-degradation capability. It showed good environmental adaptability, and its ability to solubilize phosphorus and potassium proved effective in enhancing crop growth. In a pot experiment with Chinese cabbage, inoculation with M. xuanwuensis H13 facilitated the remediation of phenol pollution by elevating the activity of soil enzymes. Notably, the results of our PLS-PM analysis confirmed that the primary effect of M. xuanwuensis H13 inoculation was the promotion of Chinese cabbage growth and quality through the stimulation of root development, while its impact on the photosynthetic system was comparatively secondary. These findings contribute to the development of sustainable agricultural practices, the understanding of microbial–plant interactions, and the potential application of microbial agents in environmental management. However, further characterization of the H13 strain, including its biosafety profile and detailed mechanisms of action, is needed before its widespread application can be recommended.
The contextual interview: a cross-cutting patient-interviewing approach for social context
The health of human beings is inextricably intertwined with their life context. This reality is reflected in the structural inequities that lead to social determinants of health (SDoH) and drive the prevalence of chronic disease in the United States. Roughly sixty percent of Americans have a chronic disease. Specific key risk factors, including chronic stress, tobacco use, inequitable access to food and nutrition, physical inactivity, and heavy alcohol use, are often associated with chronic disease. Health behaviors directly connect to the unique social context of the patient. Chronic disease prevalence is associated with complex, interrelated factors and circumstances such as health inequities, structural oppression, mental health, mental health treatment access, poverty, financial stress and inequitable wages, environmental stressors, and access to social support and connection. A myriad of chronic diseases are also correlated with experiences of childhood trauma, including diabetes, cardiovascular disease, malignancy, lung disease, and many more. The rise of deaths of despair, which include mortality resulting from suicide, drug overdose, and alcohol-related liver disease, further illustrates the complexity of social context and its relation to morbidity and mortality. It is time to re-evaluate traditional approaches to patient interviewing within medical education and ensure that clinician-patient communication and interviewing skills have an explicit emphasis on evaluating and incorporating social context. While George Engel's biopsychosocial model is a widely adopted framework for conceptualizing health, its explicit connections to clinician-patient communication and interviewing skills in a real-time medical encounter are less clear and cohesive across medical education curricula. The act of retrospectively analyzing and appreciating the complex factors affecting patients' health outcomes outside of a clinical visit, while important, is a separate skill set from the granular communication skills and emotional intelligence required to elicit, respond to, and integrate this information in real time during a clinical visit. We believe that current training models for patient interviewing do not adequately develop these skills. Gathering information about a patient's social context is a dynamic, continuous process; it is not about checking boxes or completing a one-time social history. The standard method of patient interviewing that addresses the social context has historically been described as gathering a patient's social history. The term social history is limited and does not prompt a clinician to think about the dynamic factors that currently and meaningfully influence a patient's interaction with the healthcare system and their own health. Behforouz, Drain, and Rhatigan highlight the need for medical training to reconceptualize the social history portion of the clinical interview and offer an expanded and more meaningful review of the complex factors that comprise social context. Examples of their expanded social history include emotional health, perception of healthcare, access to and utilization of healthcare, and life circumstances. Our paper complements the insights of Behforouz et al. and provides a rationale and initial educational framework for teaching an advanced model of patient interviewing during medical training: the Contextual Interview (CI).
The CI is a cross-cutting, practical, time-conscious, and semi-structured patient interviewing approach that deliberately elicits information to improve the clinician's sense and understanding of the patient's unique social context. We summarize the history of extant patient interviewing models and their limitations, report the results of an analysis of the Accreditation Council for Graduate Medical Education (ACGME) Milestones related to patient context, and introduce the core elements of the CI.
Current educational models for patient interviewing
Patient interviewing is a multidimensional activity and serves as the primary mode of ongoing cognitive and affective understanding between clinician and patient. Empirically, patient interviewing is the intentional use of specific questions directed at the patient for the collection and interpretation of pertinent health information. It can be thought of as instrumental dialogue that leads to differentials, diagnosis, and treatment planning. Patient interviewing is much more than the act of eliciting and recording data; effective patient interviewing is inherently therapeutic. It is often the connective tissue of the patient-clinician therapeutic relationship. In parallel with the sanative benefits of touch during a physical exam, patient interviewing communicates empathy, compassion, care, and respect. The patient feels attended to and connected with the clinician. Interviewing promotes engagement in recommended treatments by creating conditions for shared decision-making and talking through ambivalence. One misconception is that patient interviewing and other patient-clinician communication skills are only developed over many years of clinical practice and, consequently, cannot be easily taught to clinicians-in-training. Effective patient interviewing is not solely the purview of seasoned clinicians. This misconception serves as a barrier to adopting and disseminating more advanced interviewing techniques in undergraduate and graduate medical education. It also suggests that the science of patient interviewing education is underdeveloped and demands more investigation. Another misconception is that interviewing serves as a prelude to 'real' medicine, a forerunner for evidence-based treatments and procedures. However, the growing science behind empathy, compassion, and social connection suggests that interviewing is not an overture to medicine. Interviewing is medicine. The general adoption of advanced patient-interviewing techniques appears to be stagnant. Broadly speaking, there is little indication that medical providers are interviewing differently today than they did decades ago. Current variations of patient interviewing in practice likely stem from exposure to a combination of frameworks: medical interviewing; history of present illness (HPI); general patient-clinician communication; and behavioral medicine techniques. provides a comparison of patient interviewing techniques across the medical education literature. While these patient interviewing frameworks can be useful in certain circumstances and with certain patient issues, they lack a practical approach to patient interviewing. What is needed is a framework that is cross-cutting, in that the core interviewing and communication skills are applicable to a variety of patient concerns and symptoms. The learner does not have to select the right acronym for each symptom or clinical situation.
The CI allows clinicians to gather meaningful patient data to understand patient circumstances, perspectives, choices, and behaviors, and to develop individualized follow-up and treatment plans anchored in their unique context.
ACGME milestones, social context, & support for the contextual interview
To further strengthen the justification for training physicians in the Contextual Interview (CI), a thorough review of the ACGME milestones was conducted to determine how many milestones reference an ability to assess and understand a patient's immediate life context. The ACGME milestones represent a competency-based approach to medical education, centered on an individual learner's progression through key milestones across several competencies to incrementally assess acquisition of the requisite knowledge, skills, and attitudes needed to independently practice medicine. The ACGME Milestones were developed through a national consensus on specialty-specific competencies that all physicians are expected to reach before graduating residency. The milestones inherently both reflect and guide education and training across undergraduate and graduate medical education. Two of the authors (AC, MM) separately and then together reviewed the milestones, based on a checklist developed by the authors, to determine the extent to which the milestones and competencies directly relate to a patient's context. They reviewed the specialty-specific milestones of the primary care specialties (all of which include milestones applicable to inpatient and outpatient settings): Family Medicine, Internal Medicine, and Pediatrics. Both reviewers met, after independently reviewing each milestone, to discuss and resolve discordances in their reviews. Milestones were coded as Context Related & Patient Facing, Context Related, or neither. provides a detailed description of the checklist used to evaluate each milestone, along with examples. Each milestone was evaluated on whether it related directly to a patient's context (Criteria A: Context Related), such that the resident physician needs to acknowledge, assess, synthesize, and/or apply patient context data to fulfill the competency. Patients' contextual data include internal and external factors. Internal factors can include patients' knowledge, past experiences, memories, expectations, thoughts regarding self-efficacy/confidence, identity, frame of reference, culture, intentions, and preferences. External factors can include patients' immediate circumstances, SDoH, exposure to adversities, active trauma or abuse, oppression/discrimination, racism, quality of interpersonal relationships, and immediate physical environments. These contextual factors directly impact healthcare delivery and the patient experience. For example, a patient with a physical disability and housing insecurity is likely to experience unique facilitators and barriers in their ability to access healthcare, understand and comprehend health-related information, make treatment decisions, establish trust and confidence in the medical system, follow through with treatment plans and recommendations, meet their health goals, engage in preventative health care, and manage their chronic health conditions. If a milestone met Criteria A (Context Related), it was then evaluated on whether it targets or implies patient-facing interviewing and communication skills (Criteria B: Patient Facing).
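The paper does not describe the tooling used to tabulate these codes; purely as an illustration of the bookkeeping involved, a minimal sketch (Python, with entirely hypothetical milestone records) might look like this:

```python
# Hypothetical tally of reviewer-coded milestones; the records below are
# invented placeholders, not the authors' actual codes.
from collections import Counter

# Each record: (specialty, final consensus code after resolving discordances)
coded_milestones = [
    ("Family Medicine", "Context Related & Patient Facing"),
    ("Family Medicine", "Context Related"),
    ("Family Medicine", "Neither"),
    ("Internal Medicine", "Context Related & Patient Facing"),
    ("Internal Medicine", "Neither"),
    ("Pediatrics", "Context Related"),
    # ... one entry per milestone in each specialty's milestone set
]

for specialty in sorted({s for s, _ in coded_milestones}):
    codes = [c for s, c in coded_milestones if s == specialty]
    counts = Counter(codes)
    total = len(codes)
    for code, n in counts.items():
        # Percentage of this specialty's milestones receiving each code
        print(f"{specialty}: {code}: {100 * n / total:.0f}%")
```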
This second evaluation step allows differentiating between the ability to appreciate and acknowledge social context conceptually and the development of the interviewing and communication skills needed to actively gather, synthesize, and apply social context data and to use it meaningfully to positively impact a patient's healthcare trajectory and their subjective experience of the healthcare encounter. provides an overview of milestone data across all primary care specialties (i.e., Family Medicine, Internal Medicine, and Pediatrics) and reveals a cross-discipline average of 20% of milestones meeting both Criteria A and B and coded as Context Related & Patient Facing (FM: 21%, IM: 23%, Peds: 17%), with an additional 13% (FM), 20% (IM), and 14% (Peds) of milestones meeting Criteria A only and coded as Context Related. As expected, the Patient Care and Interpersonal and Communication Skills competencies had the highest concentrations of Context Related & Patient Facing and Context Related milestones. The ability to gather, synthesize, and apply contextual data in real-time communication while interviewing a patient comprised a significant proportion of the milestones within these two competencies in each specialty (Family Medicine; Internal Medicine; Pediatrics). These results suggest that current training expectations in graduate medical education align with the purpose of the CI.
Overview of the contextual interview
Our review of the ACGME Family Medicine, Internal Medicine, and Pediatrics milestones substantiates the critical emphasis on acknowledging, assessing, synthesizing, and/or applying contextual data to improve healthcare delivery. The CI framework can assist in the delivery of patient-centered care that sensitively and adequately ensures that SDoH and key aspects of a person's life, context, and adversities are centered in their healthcare. The CI, which includes the Love, Work, Play, & Health Behaviors (LWPHb) mnemonic, was first introduced as an interviewing approach grounded in principles of Acceptance and Commitment Therapy (ACT) for primary care and behavioral health clinicians working in primary care. The authors refer to this approach as giving the clinician a 'snapshot' of the patient's life in key life domains, and it provides a framework for gathering pertinent and contextual biopsychosocial information. Understanding a person's life and social context is a foundational concept in longstanding theories and approaches to human behavior, including Functional Contextualism, Relational Frame Theory, and Acceptance and Commitment Therapy. A core principle of Functional Contextualism is that human behavior can only be understood in relation to the context in which the behavior occurs. Connecting this to health behavior routinely seen in primary care, clinicians addressing smoking cessation cannot adequately help these patients without understanding the context in which their smoking occurs. For example, consider a patient recently divorced from a spouse who was verbally and physically abusive for nearly a decade. This patient schedules a primary care visit for a physical and to be linked with behavioral health counseling. Imagine if this clinician were to skip the step of gathering social context data and instead jump right into smoking cessation counseling with this patient. Skipping this step would miss the critical function of smoking within this patient's current context.
For this patient, smoking is a primary tool for anxiety and stress management and is likely helping the patient cope with undiagnosed posttraumatic stress disorder. Knowing the context would lead the clinician to recognize that now might not actually be the best time to engage in a smoking cessation conversation, and that this would be better introduced after the patient is linked with a behavioral health clinician. Patients know and feel when their healthcare visit is prioritizing the needs of the clinician, the healthcare system, or insurance companies rather than their own. Healthcare visits can feel like 15 minutes of to-do items and box checking, in which the topics may or may not be of primary concern to the patient. To be clear, this is absolutely a system-level, electronic medical record (EMR)- and insurance-driven reality; it is not clinician driven. Primary care clinicians are under intense pressure around documentation, EMR demands, and quality metrics. In fact, increased EMR demands, among other box-checking activities, are a source of burnout for practicing primary care clinicians. Primary care serves many vulnerable populations, including people experiencing health inequalities, chronic medical conditions, mental health disorders, substance use disorders, and ever-increasing financial and social health adversities. We need to ensure that the approach we use to learn about our patients' social context is not focused on completing a checklist, but rather is a relationship-centered, conversational approach that allows for a genuine connection between patient and healthcare clinician. The CI provides a framework, and incorporates a philosophy, that allows the pertinent information-gathering process to be conversational and to promote genuine connection. The CI is anchored in four major life domains: Love, Work, Play, and Health Behaviors. It allows clinicians to understand that they are interacting with a human being rather than a patient. The CI can aid providers in the monumental task of understanding the patient's life context outside of the medical visit, a context that is ever changing and should greatly inform healthcare clinicians' decisions and recommendations. The CI can help clinicians answer the question, 'How do I need to tailor my communication and approach to health care interventions so they are practical and applicable to this specific person's everyday social context?' Ultimately, this structure helps the clinician formulate interventions and treatment plans germane to, and inclusive of, the patient's context. The method of this interview approach is meant to be conversational, yet intentional. While the clinician is gathering information regarding the patient's context with a sense of curiosity, they are simultaneously using this information to guide their subsequent questions. The CI can be applied generally (i.e., in a health maintenance visit) or can connect to a specific presenting problem. Relevant information related to the CI may also be collected over multiple visits and, importantly, needs to be revisited routinely over time. While the LWPHb mnemonic anchors the interview, the other essential element of the CI is actively listening for social health challenges, barriers, and disparities, guided by the ACCESS-V mnemonic, which was developed by two of the authors for clinical training purposes.
The ACCESS-V mnemonic covers well-established and deeply researched social health factors such as exposure to adverse childhood experiences (ACEs); cultural considerations; internal events related to health and wellbeing (i.e., thoughts, emotions, associations, memories, physical sensations); external circumstances, situations, or relationships (e.g., current hardships and traumas, social support); exposure to SDoH and the effects of structural oppression; key information about motivation and stages of change related to health behaviors; and, finally, the patient's values and what matters most to them. All of these factors may influence a person's ability to access healthcare, understand and comprehend health-related information, make treatment decisions, establish trust and confidence in the medical system, follow through with treatment plans and recommendations, meet their health goals, engage in preventative health care, and manage their chronic health conditions. The CI changes information-gathering from a static, disease-focused process to a dynamic, contextually focused process, which allows the patient's biopsychosocial context to contribute to treatment decisions and to iterate subsequent treatment plans based on new contextual information.
Conclusions and future directions
Patient interviewing needs to evolve to more accurately reflect the influence of structural, circumstantial, and environmental factors on health, illness, and engagement in healthcare. This paper encourages the adoption of a patient-interviewing framework that prioritizes appreciation of a patient's social context, the understanding that this information is dynamic, and the utilization of contextual data to inform more patient-centered and context-sensitive treatments. Our review of the ACGME Milestones supports the need for patient-clinician interviewing and communication skills that target social context. While the CI has primarily been used by mental health clinicians working in integrated healthcare settings, we advocate for educational research to promote the use of the CI in undergraduate and graduate medical education and to clarify core learning objectives and metrics. These objectives will then contribute to the development of training materials (e.g., case studies, workshops, observation checklists) for CI skill development within medical education. Research can then shed light on the degree to which, and the manner by which, CI training affects learner skill acquisition and competency development, as well as evaluate the response of patients who are exposed to the CI. The dissemination of CI practice may especially benefit patients whose physical and mental health-related challenges are affected by structures of oppression like racism and by social forces of exclusion based on race, ethnicity, religion, sexual orientation, gender identity, etc. Educational research is needed to determine the extent to which CI training can help meet the goals of inclusive and anti-racist initiatives within medical education. The core elements of the CI emphasize eliciting patients' unique social context, thereby acknowledging that not all individuals living in the U.S. experience the same provision of healthcare services. We must honor the complexity that undergirds our patients' vulnerability to certain health behaviors, barriers, and outcomes. One approach is to sew the concept of knowing a patient's social context into the fabric of one of the most basic skills of being a healthcare clinician: patient interviewing.
Comparison of Dental Care Visits Before and After Adoption of a Policy to Expand the Dental Workforce in Minnesota
0c6d7d8c-29bb-405a-b3c2-20fe5279e4af
8933736
Dental[mh]
Despite policy efforts to reduce disparities in oral health, uptake of dental services remains low for low-income and minority populations. , A long-standing concern is the capacity of the current dental delivery system to serve the needs of vulnerable populations. , In 2020, it was estimated that nearly 60 million US residents lived in areas with dental health professional shortages, indicating a severe maldistribution of the dental workforce. Increasing the scope of practice of existing health care professionals or adding new types of clinicians is a core state policy strategy to address clinician shortages and enhance access to health care. In the area of oral health, dozens of states have liberalized the scope of practice for the long-established occupation of dental hygiene, whereas other states have sought to expand the dental workforce to include dental therapists. Dental therapists are primary dental care providers who can evaluate and treat basic dental conditions under the supervision of a dentist. Dental therapists were first introduced in New Zealand in 1921 and have been practicing in more than 50 countries. In the United States, the Alaska Native Tribal Health Consortium was the first body to authorize dental therapists to practice in 2004; as of 2020, 13 states and tribal nations had authorized dental therapists, and many others are considering it. , , , The primary aim of adding dental therapists to the dental team is to improve the affordability of dental care to expand access for underserved populations, particularly low-income and uninsured individuals in rural and tribal areas. Critics of the policy have raised concerns about the quality of care provided by dental therapists and the overall influence on population oral health. , , , Although a number of rigorous studies have demonstrated the clinical competence, patient acceptance, and cost-effectiveness of dental therapists, evidence regarding their influence on dental service use, particularly in the United States, remains sparse. , , , This situation is in part due to the small number of individuals practicing (approximately 150 in all) with the majority of dental therapists practicing in Minnesota. Regardless of the debate over dental therapists, the adoption of this model is increasing, and it is emblematic of states’ willingness to adopt scope-of-practice policies as a key strategy to eliminate disparities in oral health care access. Seven states recently authorized dental therapy; Arizona and Michigan adopted the policy in 2018, and Connecticut, Idaho, Montana, Nevada, and New Mexico adopted it in 2019. In 2009, Minnesota authorized dental therapists to practice in underserved communities, including health professional shortage areas and in settings with at least 50% of patients with Medicaid coverage or uninsured. , The aim of this study was to examine whether authorizing dental therapists was associated with improved dental care use. Our hypothesis was that dental therapists, although few in terms of measuring direct influence, can be used as a proxy for gauging attention to the issue and willingness to innovate at the state level. We took advantage of a natural policy experiment created by state variations in adopting the policy to compare changes in access to dental care in Minnesota with nonauthorization states. We examined changes among adults overall as well as among low-income populations, including Medicaid-eligible adults. 
Study Design This cross-sectional study used a synthetic control approach to construct a counterfactual control population to estimate the association of authorizing dental therapists in Minnesota with dental care use after the policy took effect. The synthetic control method uses a data-driven procedure that draws on preintervention data on the outcome and its predictors to create a weighted average of the control units (the synthetic control) that resembles the intervention group in the preintervention period. This process enables comparison of changes in dental visits in Minnesota after the policy to what would have happened there in the absence of the policy. The synthetic control approach is similar to the difference-in-differences design, which is a common quasi-experimental design used to examine policy effects. However, it requires fewer assumptions and also controls for unmeasured time-varying factors. Data and Study Sample We used data from the Behavioral Risk Factor Surveillance System (BRFSS). The BRFSS is the largest annual household telephone survey in the world and collects information on participants’ health conditions, health-related risk behaviors, and use of preventive services. We used BRFSS data from 2006 to 2018 that include oral health information. Oral health–related questions are included in the BRFSS every other year. Thus, the study period included BRFSS surveys from 2006 and 2008 (prepolicy years) and 2010, 2012, 2014, 2016, and 2018 (postpolicy years). Participants self-identified their race and ethnicity, which we used to examine racial and ethnic disparities in dental care use. Because we had to create a synthetic control for each comparison, we collapsed non-White racial and ethnic groups into 1 group, non-White. We excluded from this analysis other states that authorized dental therapists during the study period: Alaska, Arizona, Maine, Michigan, Oregon, Vermont, and Washington. We examined changes in the full sample, low-income population, and Medicaid population. The full sample included all adults aged 18 years or older. The low-income sample included adults aged 18 years or older with family income below 200% of the federal poverty level. The Medicaid population was limited to the Medicaid-eligible group and thus included adults aged 19 to 64 years with family income up to 138% of the federal poverty level. We estimated the percentage of the federal poverty level according to household size, family income, and the federal poverty guideline for each year. Our study outcome was measured with a self-reported binary indicator for whether a respondent had visited a dentist or a dental clinic in the past 12 months. This study used deidentified data and was determined not to be human participant research by the institutional review board of the Harvard Faculty of Medicine; informed patient consent was therefore not obtained. We followed the Strengthening the Reporting of Observational Studies in Epidemiology ( STROBE ) reporting guideline. Statistical Analysis We created separate synthetic controls for each group examined (full sample, low-income adults, and Medicaid-eligible adults). 
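Before detailing how those synthetic controls were built, the income-based sample definitions above can be made concrete with a brief sketch. This is an illustrative reconstruction rather than the study’s code: the 2018 poverty-guideline figures are approximate HHS values for the 48 contiguous states, and the helper names (percent_of_fpl, classify_samples) are assumptions introduced here for demonstration.

```python
# Illustrative sketch of the income-based sample definitions described above.
# The guideline figures are approximate 2018 HHS values used only for demonstration.
POVERTY_GUIDELINE_2018 = {"base": 12_140, "per_additional_person": 4_320}

def percent_of_fpl(family_income: float, household_size: int,
                   guideline: dict = POVERTY_GUIDELINE_2018) -> float:
    """Express family income as a percentage of the federal poverty level."""
    threshold = guideline["base"] + guideline["per_additional_person"] * (household_size - 1)
    return 100.0 * family_income / threshold

def classify_samples(age: int, family_income: float, household_size: int) -> set:
    """Assign a respondent to the analytic samples used in the study design."""
    pct_fpl = percent_of_fpl(family_income, household_size)
    samples = set()
    if age >= 18:
        samples.add("full")                   # all adults aged 18 years or older
        if pct_fpl < 200:
            samples.add("low_income")         # family income below 200% of the FPL
    if 19 <= age <= 64 and pct_fpl <= 138:
        samples.add("medicaid_eligible")      # aged 19-64 with income up to 138% of the FPL
    return samples

# Example: a 35-year-old in a 3-person household with $30,000 in family income is
# about 144% of the FPL, so they fall in the full and low-income samples but not
# the Medicaid-eligible sample.
print(classify_samples(35, 30_000, 3))
```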
For the full sample and low-income samples, the donor pool included 42 states, as well as the District of Columbia, that did not authorize dental therapists during the study period (Alabama, Arkansas, California, Colorado, Connecticut, Delaware, Florida, Georgia, Hawaii, Idaho, Illinois, Indiana, Iowa, Kansas, Kentucky, Louisiana, Maryland, Massachusetts, Mississippi, Missouri, Montana, Nebraska, Nevada, New Hampshire, New Jersey, New Mexico, New York, North Carolina, North Dakota, Ohio, Oklahoma, Pennsylvania, Rhode Island, South Carolina, South Dakota, Tennessee, Texas, Utah, Virginia, West Virginia, Wisconsin, and Wyoming). The synthetic control method relies on the assumption that the treated and donor pool units are similar. , Therefore, to construct a suitable synthetic control for the Medicaid sample, we restricted the donor pool to nonadopting states that provide adult dental benefits in Medicaid because Minnesota provides coverage of adult dental benefits through Medicaid. We defined states offering more than emergency dental services to adults through Medicaid as providing dental benefits. , , Accordingly, in the Medicaid analysis, we excluded from the donor pool states that do not cover adult dental services or states that changed their coverage of adult dental benefits through Medicaid during the study period. The donor pool for the Medicaid sample included the following 18 states and the District of Columbia: Arkansas, Connecticut, Indiana, Iowa, Kentucky, Massachusetts, Nebraska, New Jersey, New Mexico, New York, North Carolina, North Dakota, Ohio, Pennsylvania, Rhode Island, South Dakota, Wisconsin, and Wyoming. In addition, we conducted subgroup analysis to examine racial and ethnic disparities in access to dental care and to assess whether the policy was differentially associated with changes in care among adults in racial and ethnic minority groups. Therefore, we constructed additional synthetic controls for White and non-White individuals in each population examined. The synthetic control method used an optimization procedure with data on outcome trends and predictor variables in the preintervention period to construct a weighted average of control states from the donor pool to closely match Minnesota before the policy implementation. We included trends in dental care use and several variables associated with dental care use, including age, age squared, education, race and ethnicity, and the number of dentists per capita in each state. As a sensitivity analysis, we included other variables associated with dental care use, such as health insurance status, unemployment rate, and rurality, but these variables did not affect or improve the preintervention fit. To assess the goodness of fit of the synthetic control, we examined prepolicy trends in dental care use in Minnesota and the synthetic control by visually inspecting trends in the preintervention period. We also calculated the root mean square prediction error in the preintervention period, which measured the difference in the path of the outcome between Minnesota and its synthetic control ; thus, a small error indicated a good fit between the treatment unit and its synthetic control. To compare changes between Minnesota and the synthetic control after the policy change, we used Taylor series linearization to estimate differences in having a dental visit and to calculate the 95% CIs, similar to prior literature using synthetic control methods. 
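The weight-selection step and the preintervention fit diagnostic described above can be sketched generically: nonnegative donor weights that sum to one are chosen so that the weighted average of donor states matches Minnesota’s preintervention outcome path and predictors, and the root mean square prediction error (RMSPE) summarizes how closely the resulting synthetic control tracks Minnesota before the policy. The sketch below is a conceptual Python illustration of that idea, not the Stata synth/synth_runner implementation the authors used; the data shapes, toy numbers, and function names are assumptions.

```python
# Conceptual sketch of synthetic control weighting and the preintervention RMSPE.
import numpy as np
from scipy.optimize import minimize

def fit_synth_weights(x_treated: np.ndarray, X_donors: np.ndarray) -> np.ndarray:
    """Donor weights w >= 0 with sum(w) = 1 minimizing ||x_treated - X_donors @ w||^2."""
    n_donors = X_donors.shape[1]
    result = minimize(
        lambda w: np.sum((x_treated - X_donors @ w) ** 2),
        np.full(n_donors, 1.0 / n_donors),                    # start from equal weights
        method="SLSQP",
        bounds=[(0.0, 1.0)] * n_donors,
        constraints=[{"type": "eq", "fun": lambda w: np.sum(w) - 1.0}],
    )
    return result.x

def rmspe(y_treated: np.ndarray, y_synth: np.ndarray) -> float:
    """Preintervention root mean square prediction error; small values indicate a good fit."""
    return float(np.sqrt(np.mean((y_treated - y_synth) ** 2)))

# Toy example: rows are preintervention outcomes/predictors, columns are donor states.
rng = np.random.default_rng(0)
X_donors = rng.uniform(0.55, 0.80, size=(6, 10))
true_weights = np.array([0.5, 0.5] + [0.0] * 8)               # pretend the treated state mixes two donors
x_minnesota = X_donors @ true_weights
w_hat = fit_synth_weights(x_minnesota, X_donors)
print(np.round(w_hat, 2))                                      # weight concentrates on two donors
print(rmspe(x_minnesota, X_donors @ w_hat))                    # near zero for this constructed case
```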
Finally, as a robustness check for our study design, we performed a placebo test. Here, we repeated the synthetic control analysis but treated each state in the donor pool as the treatment unit. The difference between the actual treated unit and its synthetic control should be larger than that of most donor states in the posttreatment period. We used Stata, version 15.2 (StataCorp LLC), including synth_runner and allsynth packages, for all analyses. We used BRFSS survey weights to account for the survey design. Statistical significance was based on 2-sided P ≤ .05, which we calculated with Stata’s postestimation margins options. Data were analyzed from June 1, 2021, to December 18, 2021. 
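The in-space placebo test described above can be sketched in the same spirit: each donor state is treated, in turn, as if it had adopted the policy, a synthetic control is refit from the remaining donors, and the resulting post-period gaps are compared with the treated state’s gap. This is again a conceptual Python illustration under assumed data structures, not the authors’ synth_runner workflow; the fit_synth_weights helper repeats the one from the previous sketch so the block stands on its own.

```python
# Sketch of the in-space placebo test: refit the synthetic control with each donor
# state playing the "treated" role and compare post-policy gaps with the real one.
import numpy as np
from scipy.optimize import minimize

def fit_synth_weights(x_treated, X_donors):
    """Nonnegative donor weights summing to one that best reproduce the treated unit."""
    n = X_donors.shape[1]
    result = minimize(
        lambda w: np.sum((x_treated - X_donors @ w) ** 2),
        np.full(n, 1.0 / n),
        method="SLSQP",
        bounds=[(0.0, 1.0)] * n,
        constraints=[{"type": "eq", "fun": lambda w: np.sum(w) - 1.0}],
    )
    return result.x

def post_gap(y_unit, Y_donors, pre, post):
    """Mean post-period gap between a unit and its synthetic control (weights fit on the pre period)."""
    w = fit_synth_weights(y_unit[pre], Y_donors[pre, :])
    synthetic = Y_donors @ w
    return float(np.mean(y_unit[post] - synthetic[post]))

def placebo_share_larger(y_treated, Y_donors, pre, post):
    """Share of placebo states whose absolute gap exceeds the treated state's absolute gap."""
    treated_gap = abs(post_gap(y_treated, Y_donors, pre, post))
    larger = 0
    for j in range(Y_donors.shape[1]):
        pool = np.delete(Y_donors, j, axis=1)          # remaining donors form the placebo pool
        if abs(post_gap(Y_donors[:, j], pool, pre, post)) >= treated_gap:
            larger += 1
    return larger / Y_donors.shape[1]

# Toy usage: 7 survey years (2 pre, 5 post); outcome is dental-visit prevalence by state.
rng = np.random.default_rng(1)
Y_donors = rng.uniform(0.55, 0.75, size=(7, 12))
y_treated = Y_donors[:, :2].mean(axis=1) + np.array([0, 0, 0.05, 0.05, 0.05, 0.03, 0.0])
print(placebo_share_larger(y_treated, Y_donors, pre=slice(0, 2), post=slice(2, 7)))
```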
The full sample included 2 613 716 adults, the low-income sample included 570 487 adults, and the Medicaid sample included 97 383 adults. The mean age of the sample at baseline was 46.0 years (SD, 17.7 years); 396 501 were women (weighted percentage, 51.3%) and 241 250 were men (weighted percentage, 48.7%); and 503 197 were White (weighted percentage, 67.9%), 54 568 were Black (weighted percentage, 10.1%), 39 282 were Hispanic (weighted percentage, 14.5%), and 34 739 were other race (weighted percentage, 6.7%). The prevalence of dental visits in Minnesota before authorization of dental therapists was 76.2% (95% CI, 75.0%-77.4%) in the full sample, 61.5% (95% CI, 58.4%-64.6%) among low-income adults, and 58.4% (95% CI, 53.0%-63.5%) among Medicaid-eligible adults. Synthetic Control Goodness of Fit States contributed differently to each synthetic control. In the full sample (eTable 1 in the ), Rhode Island and Wisconsin contributed almost equally to the synthetic control (analytic weights were 0.521 and 0.479, respectively). Wisconsin had the largest weight in the White subpopulation (0.657), and Rhode Island had the largest weight in the non-White subpopulation (0.695). In the low-income sample (eTable 2 in the ), Rhode Island had the largest weight for all adults (0.754) and for the White subpopulation (0.593). In the non-White subpopulation, Massachusetts and North Dakota contributed similarly (0.594 and 0.406, respectively) (eTable 2 in the ). In the Medicaid sample (eTable 3 in the ), Massachusetts had the largest weight for the population of all adults (0.372) and for the non-White subpopulation (0.830). Rhode Island had the largest weight for the White subsample (0.266). The mean rate values in the prepolicy period for the prevalence of dental care use and its predictors in Minnesota, synthetic Minnesota, and the average of all control states in each donor pool are shown in eTables 4, 5, and 6 in the . The tables indicate that Minnesota matched the synthetic controls well in terms of the prevalence of dental visits and most predictors in the prepolicy period. For example, in the full sample before the policy adoption, the prevalence of dental visits was much lower in the average control group than in Minnesota. 
The synthetic control, in contrast, provided values similar to those of actual Minnesota. There was some divergence between Minnesota and the synthetic control in the Hispanic composition for the low-income and Medicaid samples. Trends in the prevalence of reporting a dental visit in the previous year in Minnesota and the synthetic control over time are shown for the full sample ( ), low-income sample ( ), and Medicaid sample ( ). The figures show that, in the prepolicy period (2006-2008), Minnesota closely tracked the synthetic control, indicating an overall good fit and thus providing a suitable counterfactual control population. After 2008, the trends in dental visits diverged, with higher prevalence in dental visits in Minnesota compared with the synthetic control until 2016, when the prevalence in Minnesota decreased below its respective synthetic control, particularly in the low-income and Medicaid groups. The root mean square prediction error for the full, low-income, and Medicaid samples in the preintervention period were all small, suggesting low error and a good preintervention fit between Minnesota and each synthetic control (eTables 1-3 in the ). Changes in Access to Dental Care Full Sample Authorizing dental therapists in Minnesota was associated with an increase of 2.3 percentage points (95% CI, 1.5-3.1 percentage points) in the prevalence of having a dental visit in Minnesota relative to its synthetic control, a relative increase of 3.2% (95% CI, 2.0%-4.4%) ( ). In the subgroup analysis, we estimated an increase of 2.0 percentage points (95% CI, 1.0-2.9 percentage points) in dental visits among White adults and an increase of 5.0 percentage points (95% CI, 3.5-6.5 percentage points) among non-White adults associated with the policy for 2.6% (95% CI, 1.4%-3.9%) and 8.4% (95% CI, 5.8%-11.0%) relative increases, respectively. However, in the placebo analysis, the magnitude of the difference for several placebo states (16 of 43 tests) was larger than what we estimated for Minnesota compared with its synthetic control (eFigure 1 in the ). Therefore, our estimates from the synthetic control did not provide significant evidence for changes in dental visits for the full sample after policy implementation. Low-Income Adults The absolute difference in the prevalence of dental visits between Minnesota and its synthetic control after policy implementation was 7.3 percentage points (95% CI, 5.0-9.5 percentage points), a relative increase of 12.5% (95% CI, 8.6%-16.4%). In the subgroup analysis, we estimated that authorizing dental therapists was associated with an increase of 10.8 percentage points (95% CI, 8.5-13.0 percentage points) in dental visits for White adults (relative increase, 19.8%; 95% CI, 15.6%-24.1%). However, we did not detect any significant changes for non-White adults. The synthetic control estimates were robust in placebo analysis, indicating significant improvements in dental visits after the adoption of dental therapy in Minnesota until 2014. No other state in the donor pool demonstrated a greater gap than what we observed in Minnesota in 2010 (0 of 43 states), and only a small number of states had a larger gap in 2012 (2 of 43 states) and 2014 (6 of 43 states) (eFigure 2 in the ). 
Medicaid-Eligible Adults Authorizing dental therapists in Minnesota was associated with an increase of 6.2 percentage points (95% CI, 2.4-10.0 percentage points) in having a dental visit for all adults and an increase of 13.5 percentage points (95% CI, 9.1-17.9 percentage points) for White adults (10.5% [95% CI, 3.9%-17.0%] and 25.5% [95% CI, 17.0%-34.1%] relative increase, respectively). Results from the placebo test are consistent with our observed synthetic control estimates, suggesting that our findings are unlikely to be due to chance. The magnitude of the gap between Minnesota and its synthetic control was larger than that of all other states in the donor pool until 2012 (0 of 19 states), and only a few states had a larger gap in 2014 (6 of 19 states) (eFigure 3 in the ). 
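As a rough arithmetic check on how the absolute and relative figures reported above fit together, one consistent reading (an inference on our part rather than a formula stated by the authors) is that each relative increase equals the absolute percentage-point difference divided by the synthetic control’s counterfactual prevalence:

```latex
% Assumed relationship between the reported absolute and relative changes
\[
\text{relative increase} \;=\; \frac{\Delta_{\text{pp}}}{\hat{p}_{\text{synthetic}}}
\qquad\Rightarrow\qquad
\hat{p}_{\text{synthetic}} \;\approx\; \frac{2.3}{0.032} \approx 72\% \ \text{(full sample)},
\qquad
\frac{7.3}{0.125} \approx 58\% \ \text{(low-income adults)}.
\]
```

Under this reading, the implied counterfactual prevalences sit just below the corresponding Minnesota baselines reported earlier (76.2% and 61.5%), which is at least consistent with the figures shown.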
Despite previous research demonstrating the effectiveness of dental therapists in the United States, little is yet known about the policy influence on populations’ access to dental care. Much of the evidence is based on small observational studies limited to a single state. Using a synthetic control method and nationally representative data, we examined the association between authorizing dental therapists and dental visits by comparing Minnesota with a synthetic control. We found that authorizing dental therapy was associated with increases in dental visits among low-income and Medicaid-eligible adults overall. We also found that the adoption of the policy was associated with an increase in dental care use for White adults without corresponding increases among other racial and ethnic groups. These findings provide new evidence on the association between authorizing dental therapists and access to dental care. Among the 13 states that have authorized dental therapists, 8 are still in the implementation stage and do not have any dental therapists in practice. Recent estimates suggest there are approximately 150 dental therapists practicing in the United States, with nearly 100 therapists practicing in Minnesota. 
The increases in dental use observed in our study may be owing to a combination of a direct association with the newly deployed workforce and a positive spillover effect of the policy on dentists. In addition to a dentist shortage in underserved regions, low dentist participation in Medicaid creates a major barrier to accessing dental care among Medicaid beneficiaries. We speculate that authorization of dental therapists created competition in the dental industry, encouraging more dentists to be willing to treat low-income and underserved populations. Our findings suggest that barriers to obtaining dental care remain a significant challenge for minority populations and underscore the importance of predisposing factors on the use of health care services. Estimates from our analysis indicated significant improvements in having a dental visit in the past year associated with the policy but only for White adults. These findings have important implications for state policies and are likely associated with structural racism in health care, such as differential distribution and segregation of dental clinics. Our study extends previous findings regarding the influence of expanding the scope of practice or adding new health care professionals on health care delivery. Several studies have shown that nurse practitioners play an important role in improving access and health outcomes for underserved populations, particularly in rural areas. In addition, there is evidence indicating greater acceptance of Medicaid beneficiaries in health care settings that include nurse practitioners. Other studies have also suggested that expanding dental hygienists’ scope of practice facilitates the delivery of preventive oral health services, leading to better population oral health. Successful integration of dental therapists into dental care delivery systems requires a collaborative effort between policy makers, clinicians, and dental educators. As the availability of training programs and the number of dental therapists increase, future research should continue to track and monitor the influence of this evolving workforce model on population oral health. Limitations This study has limitations. We measured access to dental care by using only 1 self-reported question, which is susceptible to both recall and social desirability biases. In addition, the survey question asked whether participants visited a dental clinic and not whether they had consulted a dental therapist. Therefore, in Minnesota, some respondents who were treated by a dental therapist might have answered no to the question about visiting a dentist. Hence, we might have slightly underestimated the actual number of people who received dental care (albeit from a dental therapist). In addition, we did not assess the association between adopting dental therapy and oral health outcomes. As more data become available, future research should assess changes in use and clinical measures of oral health associated with the policy. Finally, estimates from the synthetic comparison involved few states; for example, in the full sample, only 2 states contributed to the synthetic control. This limitation may have reduced the presumed benefit of synthesis for generating a control estimate that averages the noise associated with interstate variation. 
Evidence from this study suggests that expanding the dental workforce to include dental therapists in Minnesota was associated with improvements in access to dental care among disadvantaged populations. Our study also strengthens the evidence on persistent racial and ethnic disparities in access to dental care. As more dental therapists begin practice, it remains important to conduct more research to examine mechanisms by which workforce policies can improve access to dental care to meet the oral health needs of underserved communities.
A Roadmap for the Student Pursuing a Career in Pediatric Emergency Medicine
12c0c288-f66b-49d3-9802-2e698ff5df7e
6948678
Pediatrics[mh]
Many physicians have elected to pursue career pathways focused on the care of children in the emergency setting. Focused training toward such a profession began in a non-accredited format in the early 1980s. In 1987, the American Board of Emergency Medicine (ABEM) and the American Board of Pediatrics (ABP) published guidelines for combined training in their two specialties. Later, the ABP developed a pediatric emergency medicine (PEM) fellowship-training track with ABEM participation, and the first sub-board certification exam was offered in 1992. As a result, there are three distinct training pathways available to medical students considering a career in pediatric emergency medicine (EM): a three-year pediatric residency followed by a three-year PEM fellowship (Peds-PEM); a three- to four-year EM residency followed by a two-year PEM fellowship (EM-PEM); and a five-year combined EM and pediatrics residency (EM&Peds). All three pathways provide pediatric emergency care training in excess of what is required by the Accreditation Council for Graduate Medical Education (ACGME) training guidelines for both EM and pediatric residencies. Peds-PEM and EM-PEM pathways were established in the 1990s by the ABP in conjunction with ABEM. EM&Peds guidelines were first described in a joint position statement by ABEM and ABP originally published in 1987 and were recently updated in 2016. The specific requirements for Peds-PEM, EM-PEM and EM&Peds are detailed in . Residency and fellowship programs are given autonomy by the ACGME to design their curricula in accordance with the resources of their institutions and the needs of their residents or fellows within approved training guidelines. All pathways impart pediatric emergency care expertise but with differences in core training content that lead to a variation in clinical practice. Given the five-to-six-year training commitment, it is crucial that medical students considering a career in pediatric EM understand the nuances of each pathway prior to the National Resident Matching Program submission deadlines. We present a curriculum analysis that aims to elucidate the different clinical trajectories of each pathway and aid in appropriate selection for the individual student’s career goals. We obtained a list of currently credentialed Peds-PEM, EM-PEM, and EM&Peds programs from the ACGME website in January 2018. Each program’s curriculum was obtained from its official website. Programs with insufficient curriculum or no curriculum posted on their websites were excluded. We analyzed dedicated educational units (EU) regarding time spent in the emergency department (ED) (adult or not specified between adult & pediatric), pediatric-only ED, critical care (including adult medical, trauma, pediatric, and neonatal), and research. These were chosen as they are the most common for comparison purposes and make up the majority of EUs in each of the three pathways. The ACGME considers 12-month/year and 13-block/year EUs to be equivalent. EUs split between two experiences were assigned 0.5 EU to each area. Results were averaged for each of the three training pathways. Educational Research Capsule Summary What do we already know about this issue? There are three pathways toward a career in pediatric emergency medicine (EM), each with its own strengths and limitations . What was the research question? What are the differences between pediatric EM training pathways that students pursuing this career should understand? What was the major finding of the study? 
The curriculum of each pathway, with its noted strengths, can be used to guide students toward their ultimate desired careers . How does this improve population health? Students can identify their career path early in their training toward future careers focused on the care of children in the emergency setting . Peds-PEM and EM-PEM graduates were assumed to have fulfilled the minimum ACGME-required EUs during their primary residency and these were added to each fellowship’s totals prior to averaging. These include three EU pediatric-only ED and four EU critical care required to complete a pediatric residency, and five EU pediatric ED and four EU critical care required to complete an EM residency. Dedicated EUs spent in the ED for primary EM residents were assumed to be 21.8 based on the mean number for three-year EM residencies published in 2015 by Stowell et al. We used Google Sheets (Google, Mountain View, CA) to tabulate and average the totals. We identified a total of 113 active programs from the ACGME website (75 Peds-PEM; 34 EM-PEM; 4 EM&Peds). Of these, 64 Peds-PEM programs (85%), 29 EM-PEM (85%), and 4 EM&Peds (100%) had published curricula on their official websites and were included. Results are tabulated in . The EM-PEM training track demonstrated the most overall dedicated ED EUs (35.4) followed by EM&Peds (26.1) and Peds-PEM (20.4). Peds-PEM graduates have the most dedicated pediatric-only ED EUs (20.1) followed by EM-PEM (18.3) and EM&Peds (8.0). EM&Peds graduates have the most dedicated critical care EUs (10) followed by EM-PEM (6.5) and Peds-PEM (5.8). Peds-PEM graduates have the most dedicated research EUs (9.0) followed by EM-PEM (3.3) and EM&Peds (0.3). As all three pathways offer pediatric emergency training beyond what the primary residencies of EM or pediatrics offer, it is the route taken that will most affect the ultimate career options. The overlapping strengths of each provide an environment for the graduates of each training pathway to gain sufficient experience in both acute and critical care of the pediatric patient. The strengths and potential limitations of each pathway are highlighted in . Pediatrics-Pediatric Emergency Medicine Pathway The Peds-PEM pathway offers the most overall training in pediatrics with a foundation of ambulatory and inpatient care in the primary residency followed by specialty training in pediatric emergency care during the fellowship. This requires both a residency and a fellowship match. The Peds-PEM route aims to train pediatricians first and then focus them into pediatric emergency physicians through a large amount of time spent in the pediatric ED with a targeted exposure to adult EM as required by the ACGME. There is a heavy focus on dedicated research time compared to the other pathways. Peds-PEM graduates are eligible for the PEM sub-boards co-sponsored by ABEM and ABP. The clinical scope of Peds-PEM is limited to patients <21 years of age, making children’s hospitals or EDs with a high pediatric volume the ideal career setting for these graduates. However, these age limits are noted to be arbitrary and 21 years is not a firm limit. Still, much of adult EM will be outside the scope of training and hospital privileges afforded to Peds-PEM graduates. Peds-PEM duration of training is six years without variation as described by the ACGME program requirements for PEM fellowships. 
Emergency Medicine-Pediatric Emergency Medicine Pathway The EM-PEM pathway offers the most overall training in EM with a foundation of emergency and critical care in the primary residency followed by specialty training in pediatric emergency care during the fellowship. This requires both a residency and a fellowship match. The EM-PEM route aims to train emergency physicians first and then to focus them into pediatric emergency physicians through dedicated pediatric ED time and some subspecialty pediatrics. EM-PEM graduates are eligible for the PEM sub-boards co-sponsored by the ABP and the ABEM. The clinical scope of EM-PEM is all ages, although these graduates are limited to an ED practice setting without the potential for pediatric ambulatory or inpatient medicine. EM-PEM duration of training is between five and six years. This is variable as students may choose to pursue a three- or four-year EM primary residency prior to their two-year fellowship. Emergency Medicine and Pediatrics Pathway The EM&Peds pathway offers the broadest training of all pathways with complete training in both general pediatrics and emergency medicine. This requires only a residency match. The EM&Peds route aims to simultaneously train emergency physicians and general pediatricians, resulting in pediatric emergency physicians. Although this pathway has the fewest pediatric-only ED EUs, the philosophy is to learn procedural skills and acute care principles through EM residency training and complete care of the pediatric patient through ambulatory and inpatient pediatric rotations. The result is not only an exposure to all aspects of pediatric and EM care but also a heavier focus on critical care compared to the other pathways. EM&Peds graduates are eligible for dual board certification in both EM and general pediatrics but have not been eligible for the PEM sub-board certification since 1998. Some centers, predominantly freestanding children’s hospitals that care only for children, consider PEM sub-board certification a prerequisite, which may be a limitation for EM&Peds graduates. However, the versatility of EM&Peds training may be a strength to centers that care for both adults and children. Many EM&Peds graduates work in academic, community, or rural centers. EM&Peds graduates are trained to care for children in ED, ambulatory, and inpatient settings. EM&Peds training duration is five years as set by the joint ABP and ABEM agreement. Nuances Between Pathways The EM&Peds physician and the Peds-PEM physician both share the primary pediatric board, allowing eligibility for additional ABP-sponsored fellowship training or shared time as a clinical pediatrician or pediatric hospitalist in addition to their EM practice. Similarly, the EM&Peds physician and the EM-PEM physician share the primary EM board, allowing for EM fellowship training potential. Certainly, all of the pathways in pediatric EM provide a background for such physicians to take positions of advocacy and leadership in clinical and academic settings. There are notable differences in the number of physicians trained through each pathway. A 2006 pediatric study referencing the Future of Pediatric Education II data revealed that at the time there were approximately 1300 ABP-certified Peds-PEM practitioners compared to only 170 ABEM-certified EM-PEM practitioners, a proportion that has likely continued to shift toward less representation by EM-PEM physicians. In 2007, Murray et al. 
also showed through a survey of PEM fellowship programs that only 5% of entering fellows had an EM primary board background. More recently, 2018 ABEM data reveal that in 2017, only 40 ABEM-eligible EM-PEM physicians were enrolled in PEM fellowship programs, suggesting that only ~20 EM residency graduates enter PEM fellowships annually. The reasons for this are not clear, although recently the ABEM EM to PEM taskforce has sought to address this difference. One possibility is that EM graduates do not seek to be further specialized, as acute care of children is already within their scope of practice. Centralization of pediatric emergency care may also lead to fewer opportunities for EM-PEM graduates in community EDs, where PEM fellowship training would not necessarily be more advantageous than general EM training alone. During our research we noted that there were fewer EM-based PEM fellowships (29) than pediatrics-based PEM fellowships (64). Notably, several pediatrics-based programs that published a Peds-PEM curriculum did not publish an EM-PEM curriculum. Although possibly due to omission from their websites, PEM fellowship programs are not required to accept both pediatrics and EM candidates. This may indicate fewer available fellowship opportunities for the EM-PEM pathway compared to Peds-PEM or that EM-PEM trainees are required to complete three years at that fellowship instead of two. Lastly, financial differences may contribute to this issue. PEM-fellowship trained physicians traditionally have a lower salary than general emergency physicians. However, EM&Peds graduates do report making salaries similar to those of their general EM colleagues. We speculate that EM-PEM graduates likely make similar salaries to EM and EM&Peds graduates given their capacity to care for adults. We also speculate that salary is more likely related to the practice setting than the training itself, although the training does in part help determine the practice setting. There were 48 postgraduate year 1–5 candidates enrolled in EM&Peds programs in the ABEM dataset, making it the second most common pathway chosen. Still, the vast majority of pediatric emergency providers are Peds-PEM, making the EM&Peds pathway less well known by comparison. More research might reveal more subtle differences between the specifics of these training pathways regarding specific procedural experience, patient volume, or other metrics. Choosing a Pathway What may be considered a limitation to one student may be a strength to another. An appropriate starting point may be whether the student wishes to care for adults or only children. Should students not wish to care for adults, a Peds-PEM pathway would be most suitable. If a student would like to care for adults, the applicant would be directed toward either EM-PEM or EM&Peds. The difference here is eligibility for the PEM sub-boards and general pediatrics exposure for the EM&Peds graduate. EM-PEM graduates are eligible for sub-board certification, which may increase the likelihood of employment in some children’s hospitals or other centers that require subspecialty certification. EM&Peds graduates are no longer eligible for sub-board certification, which can be a deterrent to certain centers. However, EM&Peds graduates have a much broader scope of practice with the potential for more varied career paths, including ambulatory and hospital pediatrics, to which EM-PEM graduates do not have access. 
This may be attractive to centers looking to employ a provider in several clinical areas or departments. Students can certainly blaze their own trail within a given pathway but should be aware and well prepared for the path ahead of them. 
Many EM&Peds graduates work in academic, community, or rural centers. EM&Peds graduates are trained to care for children in ED, ambulatory, and inpatient settings. EM&Peds training duration is five years as set by the joint ABP and ABEM agreement. The EM&Peds physician and the Peds-PEM physician both share the primary pediatric board, allowing eligibility for additional ABP-sponsored fellowship training or shared time as a clinical pediatrician or pediatric hospitalist in addition to their EM practice. Similarly, the EM&Peds physician and the EM-PEM physician share the primary EM board, allowing for EM fellowship training potential. Certainly, all of the pathways in pediatric EM provide a background for such physicians to take positions of advocacy and leadership in clinical and academic settings. 
The data collected are limited to only those programs with a published online curriculum. Because programs were not polled directly, the picture is incomplete and may have failed to capture more recent developments in certain programs. However, the authors felt that this approach was similar to that of a medical student researching future career options and was thus appropriately realistic with a relatively large sample size. Confirmation and clarification from programs would increase the overall accuracy of the available data by ensuring only the most recent/updated curriculum was used, and would add more data points by including programs without a publicly published curriculum. Longitudinal experience was not accounted for in the dataset as only dedicated EUs were included. Similarly, many training programs also incorporate clinical shifts into elective or research time. However, dedicated EUs are what is mandated by the ACGME as well as by the ABP and ABEM for board certification and thus are a better marker of the overall goals of training programs. The exact number of dedicated EUs that Peds-PEM and EM-PEM residents complete during residency was estimated. However, all graduates from accredited pediatrics or EM primary residencies are eligible for the PEM fellowship, thus making the minimum number required a reasonable estimation. To our knowledge, a central resource with this depth of analysis and information was not previously available to medical students considering their career choices. Three training pathways lead to expertise in pediatric emergency medicine, although with different career trajectories. Peds-PEM training is ideal for the student who does not wish to care for adults, although clinical career options may be limited to children’s hospitals or EDs with a high enough pediatric volume to sustain the narrower scope of practice. EM-PEM and EM&Peds pathways are similar, although the lack of sub-board eligibility for EM&Peds may be a limitation for clinical careers in centers that require the sub-board certification. 
The curriculum of each pathway can be used to guide students toward their ultimate desired career. Understanding the characteristics of the currently available paths will hopefully set students up for success in future careers focused on the care of children in the emergency setting.
Improving the paediatric surgery patient experience: an 8-year analysis of narrative quality data
5b5c76dd-6678-490d-b703-0a7057bafc0c
7223344
Pediatrics[mh]
Patient experience and patient outcomes play a fundamental role in the quality of healthcare systems. Quantitative and qualitative data about patient experience can help healthcare professionals and administrators better understand the needs of patients and their families as well as provide a foundation for improvement of procedures, processes and services. However, units often lack a methodological framework to analyse these data empirically and derive key areas for improvement. The American College of Surgeons National Surgical Quality Improvement Program (ACS NSQIP) is aimed at improving the quality of surgical care by collecting patient data and reporting risk-adjusted surgical outcomes for each participant hospital in the programme. Through medical records and communications with patients, ACS NSQIP collects patient information, such as patient demographics, preoperative risk factors, operative information, and perioperative and postoperative outcomes within a defined time period after surgery. This information is used to calculate the expected number of negative outcomes for each institution and this number is compared with the observed number of negative outcomes to pinpoint the areas that need improvement. Though qualitative data about patient experience are captured as part of the NSQIP database, to date no framework has been proposed, or reported on, to analyse these data for the purposes of quality improvement (QI). Here we address this critical gap by hypothesising that content analysis is a feasible methodology for this purpose and test our hypothesis using a sample of over 3600 qualitative comments from patients and families. Patient experience in surgery Patient experience in medicine is an important driver of healthcare quality. In surgery, including paediatric surgery, most of the research explores the relationship between patient satisfaction and patient outcomes, such as readmission, morbidity and mortality rates. Many studies collect data using patient satisfaction surveys, with the Hospital Consumer Assessment of Healthcare Providers and Systems survey being the most commonly used patient experience reporting tool. Other studies use preoperative and/or postoperative questionnaires to measure patient experience and outcome, as many have found that high patient satisfaction is linked to lower 30-day mortality, morbidity and readmission rates. However, other studies have shown a non-linear relationship between patient satisfaction and patient outcomes. As such, harnessing a methodological framework to evaluate patient experience independently of outcomes can assist in establishing quality baselines and in identifying areas for improvement that are more remotely linked to morbidity or mortality outcomes. The goal of this study was to test the feasibility of using content analysis to conduct a rigorous descriptive quantitative analysis of narrative data to further our understanding of the paediatric surgery experience and to identify key strengths and opportunities for improvement. 
Study context and participants British Columbia Children’s Hospital (BCCH) has participated in ACS NSQIP since 2011. The quantitative clinical data collected have been used to compare morbidity and mortality rates with other paediatric hospitals, and identify targets for QI work. Using the rates of adverse outcomes as a guide, the ACS NSQIP team has created QI projects to target the most common adverse outcomes for reduction. In addition to collecting quantitative patient data, patient families are given the opportunity to provide feedback on their experience at BCCH at 30 days after surgery. Since 2011, a total of 3601 narrative reports on patient and family experience have been aggregated. Design and coding Taking the feedback from patient families, we conducted a content analysis of the narratives that have been received since 2011. We used qualitative description to characterise the experiences of families. A total of 3601 narrative patient experience comments were retrieved from the ACS NSQIP database and the unit of analysis was selected to be each single comment independent of its length. Comments that did not pertain specifically to patient experience (n=111) were excluded from the analysis. The full sample was randomised and an initial 10% of the comments were retrieved for a pilot analysis in order to develop a coding guide to represent the broad themes found using an emergent coding strategy. This coding guide was first refined through discussion between researchers (JMR (F), SCB (F), MTT (F)). To ensure that the coding guide reliably captured all the thematic categories within the data, two coders (SCB, MTT) applied it to a second subsample of 10% (n=360) of the comments. Any discrepancies or disagreements were discussed and resolved through consensus, and the coding guide was refined using an iterative process using additional subsamples of 5% of the data. Following this pilot coding phase, inter-rater reliability reached 93% between the two coders using the coding guide. The remainder of the sample was coded by the primary coder (SCB) using Microsoft Excel. The final coding guide comprised the following major themes that reflect the types of issues and feedback contributed by patients and families: (1) overall satisfaction with patient experience; (2) health outcomes (eg, pain management, healing, complications); (3) experiences with hospital staff (eg, overall staff, doctors, nurses, trainees); (4) communication (eg, from doctors, between doctors/departments); (5) timelines (eg, wait times); and (6) facilities. 
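To make the coding and reporting steps concrete, the following is a minimal sketch of how percent agreement between two coders and theme frequencies in the (percentage, numerator/denominator) format could be tabulated. It is not the authors' actual workflow (the coding itself was done in Microsoft Excel), and the comment labels below are hypothetical.

```python
from collections import Counter

# Hypothetical single-theme labels assigned to the same five comments by two coders.
coder_a = ["satisfaction", "staff", "pain management", "timelines", "satisfaction"]
coder_b = ["satisfaction", "staff", "communication", "timelines", "satisfaction"]

# Simple percent agreement: the share of comments on which the two coders agree.
agreement = sum(a == b for a, b in zip(coder_a, coder_b)) / len(coder_a)
print(f"Percent agreement: {agreement:.0%}")

# Theme frequencies reported as (percentage, numerator/denominator),
# mirroring the reporting convention used in the Results.
counts = Counter(coder_a)
total = len(coder_a)
for theme, n in counts.most_common():
    print(f"{theme}: ({n / total:.0%}, {n}/{total})")
```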
Patient and public involvement Patients and their families contributed the narrative reports analysed in the present study. The research team consulted with patient representatives when interpreting the results. In response to the findings, the research team is planning a series of workshops with patients, families and hospital staff to improve pain management communication. Patients will contribute to the dissemination strategies by determining the results they deem most important to share with hospital staff in order to encourage improvements of the overall quality of care. 
Through content analysis, the large quantity of patient narrative data was reduced to quantitative measures related to the key thematic categories. To demonstrate the feasibility of using content analysis to rigorously capture key areas for improvement in a set of narratives and to illustrate the nature of a data set resulting from content analysis, here we provide descriptive data from our sample organised by major theme, supported by relevant quotes. Henceforth, for each thematic category, numbers are reported as (percentage, numerator/denominator for the subsample of the thematic category). For example, of the total comments that relate to a certain subsample of a thematic category (eg, comments on pain management), a percentage of those comments indicate a specific experience within that subsample (eg, good pain management or poor pain management). Overall experiences A majority of comments (77%, 2778/3601) mentioned overall satisfaction with the experiences of families and care their child received (see for a detailed breakdown of the comments regarding overall satisfaction). Of these comments expressing an overall level of satisfaction (n=2778), most indicated that families were pleased with the care they experienced, grateful to the hospital, and expressed comments that were characterised as very satisfied (56%, 1543/2778) or satisfied (26%, 731/2778) by the research team. Examples that illustrate satisfaction with patient experience include: We’re glad we went to BC Children’s Hospital. Couldn’t have been any better. Our daughter was treated with the utmost care, respect and love! Thank you for the amazing work you all do! Health outcomes A common topic of feedback was regarding the physical health outcomes of children after surgery (26%, 921/3601). General mentions of health outcomes were mostly positive (good=67%, 619/921; bad=33%, 302/921) (see for a more detailed outline of comments regarding health outcomes). A majority of families who reported on whether their children had improved following surgery indicated an improvement (92%, 262/285). With regard to wound healing specifically, a majority of the comments were positive, stating that the wounds were healing well (82%, 103/126). However, families had negative feedback related to complications after surgery (52%, 165/315) and pain management (93%, 71/76). Even though parents were happy with some parts of their experience, they were mostly worried about their child’s pain: Satisfied w/ Dr’s. Not happy with discharge after 2 days. Had to return to hospital. Should have kept her in longer to keep close eye on her. She was still in pain. I think they should keep the patient in for a few days longer. Experiences with hospital staff A proportion of the total comments (57%, 2040/3601) included a mention of hospital staff, healthcare professionals or others involved in the child’s care. The majority (91%, 1865/2040) of these comments included positive feedback about the personnel families encountered during their hospital visit, and about nurses and doctors in particular: Every single member of the surgical team (attending, fellows, residents, nurses etc.) gave fantastic family and patient centered care to my son. 
There seems to be amazing communication amongst staff in your department and fantastic modelling of great care to the more junior staff. Thank you. P.S. There actually isn’t one thing that I can think of that would have made my son's surgery or care better or safer. :) However, the balance between positive and negative comments was not equal for each type of healthcare professional or staff. For example, in comparison to the comments on other medical staff, an increased number of negative comments were reported for experiences with medical residents (positive=55%, 12/22; neutral=9%, 2/22; negative=36%, 8/22) and clerical staff (positive=62%, 16/26; neutral=8%, 2/26; negative=31%, 8/26). Some parents felt as though residents needed more supervision or training in order to care for their child and that some clerical staff were not helpful. Comments were also coded for the types of qualities the families observed in staff. The most commonly mentioned qualities were that staff were helpful (20%, 228/1162), friendly (14%, 167/1162) and supportive (12%, 140/1162). Negative traits were mentioned in a smaller proportion of comments, for example, not helpful (2%, 18/1162), not efficient (2%, 26/1162). In addition to these qualities, some parents also raised comments about whether they felt that the staff listened to their concerns, with most feeling that they did not listen (61%, 56/92). For those who mentioned their experiences with the bedside manner of doctors and nurses, 67% felt that doctors and nurses had good bedside manner (16/24) and 33% had bad bedside manner (8/24), as illustrated in the following example: The nurses in the recovery room need to have better bedside manners! To capture a more specific feature of the patient and family/personnel interactions, a code was generated for comments giving a sense of ‘dignity and respect’ or the lack thereof. This represented those experiences of parents who felt that they were treated respectfully by the people they encountered at the hospital: […] not rushing made us feel like we were human, not just part of an assembly line. Communication Families had different experiences with the types of information provided to them during their stay at the hospital regarding surgical processes, information about home care after discharge and information about the healing trajectory after discharge. A majority (74%, 158/214) of comments indicated that families felt that they were provided with adequate information for them to feel comfortable at the hospital, and some felt that there were communication issues between doctors, departments and other staff, which affected the care they received: [There was a] hiccup, miscommunication, the right people didn’t know he was scheduled for surgery. Gtube nurse didn’t know he had surgery, so we had to wait to see her, didn’t end up seeing her. Nurse had to sign forms for supplies, but the nurse didn’t have [Patient] on list. So no supplies, and didn’t actually get them for 1.5 wks. A common discharge issue identified in the comments was that families did not feel they received enough information that prepared them for caring for their child at home after surgery (79%, 67/85), as well as not being given adequate details about what would happen during the healing process (89%, 59/66): We were discharged with no specific follow-up instructions, unless we were concerned or saw infection or other complication. [Patient] was doing very well post-op but I just wanted reassurance by having our GP look at incision. 
Another example highlights the emotional aspect of the lack of communication after discharge: When getting discharged we did not get any instructions regarding possibility of post-surgical seizures. [He] did not experience a seizure until 5 days after surgery; the night we went home to [City] 2 days prior to that he was taken off the steroids used to control the swelling. I wonder if this was why he had seizures & why we were not warned this could happen. We had no knowledge of what to do we were unprepared & frightened. Timelines A substantial number of responses about surgery were related to the timelines that families experienced. Some parents reported long wait times in the clinic (35%, 11/31), while comments referring to waiting for surgery (long wait times=79%, 92/117; short wait times=21%, 25/117) were coded when families expressed that they waited a long time for the surgery to be scheduled or when there were long wait times during the day of surgery. Families experienced delays in imaging (eg, MRI, X-ray) before surgery (4%, 6/167) and a few (2%, 3/167) were given less than 24 hours’ notice for surgery, which was difficult for those with long travel distances to the hospital. In addition, some parents (20%, 53/269) felt that their children were discharged too soon after surgery: …discharged you rushed them out, he couldn’t get up, rushing him out. Threw up in car and elevator. Keep them in for longer. Another example demonstrates the feelings that families have when they are discharged too soon from the hospital: Our daughter was discharged 12 hours after her emergency lap Appendectomy. This seems a bit too soon. Sure enough she developed a fever within 24 hours post discharge. I would think that patients should be observed for longer after operation of this magnitude. Facilities Families who had experiences with the clinic had issues with contact, as some reported that it was difficult to make an appointment (52%, 16/31) or that the clinic did not book a follow-up appointment with them (13%, 4/31). Safety concerns (9/3601) were mentioned when families felt that they or their child was unsafe due to the provided amenities or the events happening within the hospital: A very unusual thing happen. There was some kind of child custody issue on ward, I felt extremely unsafe, begged Dr. to let us go home, I felt unsafe. Individual was still allowed in. We should not have been around that, felt uncomfortable. Unsafe around that parent. Told Dr. how we felt. We went home early. Other little girl in room was so upset. A bit scary. A small proportion of comments (7%, 238/3601) included considerations related to the physical and administrative components of the actual facility. Examples of comments about facilities included issues with the shared rooms being too loud and disruptive for the families or their child, and having to request to move to a single room. Other issues were raised about parking, such as technical difficulties with the parking metre, stress about topping up the parking metre and the excessive parking costs during their visit and stay. 
The use of content analysis to analyse healthcare and patient experience data has become increasingly popular over the last decade. While this method has been traditionally used in the context of interview studies or media analyses, content analysis is now being applied to large data sets, including social media and clinical databases, as in the present analysis. Results from the 8-year analysis of patient experience narratives demonstrate the potential for empirical content analyses of databases such as ACS NSQIP to successfully uncover key strengths and areas for QI in the paediatric surgery setting. Methods in paediatric patient experience research usually involve distributing surveys to the parents of patients to better understand their experience, with some studies also involving the paediatric patients directly. Findings from these active participation studies align with those of our database analysis. For example, survey results revealed that parents reported more positive experiences when information about the surgery was provided ahead of the intervention, and when families encountered staff who were not in a rush to provide care. In addition, higher patient and family satisfaction was also linked to the involvement of the child patient in conversations about his/her health and treatment. These findings are supported by the emphasis on quality communication uncovered in the NSQIP data. With the increasing interest in using patient experience to improve the quality of healthcare, the body of literature assessing the impact of using patient experience data as a driver of QI initiatives is growing. Indovina et al used patient experience data to inform a randomised QI initiative to tackle low patient satisfaction in physician communication. 
As a result of this intervention, satisfaction with communication from doctors, patient perception of courtesy and respect from physicians, and satisfaction with physicians listening to patients increased by 21%, 53% and 38%, respectively. Similarly, a Pittsburgh hospital created a QI project to increase patient involvement and positive experiences at discharge following adult orthopaedic spine surgery. Patient satisfaction scores increased when patients and families were involved in goal setting and were provided with education videos during the discharge planning process. In the paediatric setting, Soeteman et al described using surveys before and after introducing a programme to reduce paediatric ambulatory clinic wait times. Feedback from patients provided the opportunity to evaluate current operational processes and further improve the quality of services. The methodology described in the present study and the findings from our analysis offer a foundational process to capture the authentic experiences of families and patients and use these data to proactively address quality and satisfaction of care. Some of the challenges in patient experience data analysis include recall bias, limited generalisability between populations, response rates, and difficulty in translating the data into meaningful care improvements. While the analysis in this study provides new applications of analytical methods and detailed insight into the experiences in paediatric surgery, we acknowledge the limitations of the approach. First, families are more likely to comment about a certain aspect of their experience, such as pain management, if it was a negative experience rather than a positive one, which creates bias in our findings. Second, there may have been a selection bias as families were contacted via phone for their comments only if they did not have a 30-day follow-up. For complex cases, a letter was sent to complete a survey along with any additional comments they wanted to share. Although these comments were also entered in the NSQIP database, the different data collection modalities may interfere with data quality. Third, all comments were given in English and therefore do not capture the perspectives of patients and families who do not speak or write in English. Fourth, feedback regarding the hospital facilities (eg, rooms) may not be relevant as the building was upgraded during the data collection period. Finally, patient narratives are not a validated metric in ACS NSQIP; therefore, the present study focuses on the importance of their inclusion. Future research will explore linkages between patient narratives and surgical outcomes, and the effectiveness of internal and patient-facing dissemination strategies for the data obtained through the present analysis. Databases such as ACS NSQIP play a critical role in improving the quality of surgical care in paediatric populations. The inclusion of patient experience as an outcome of the surgical encounter enriches the data available for analysis and provides new opportunities for improving care to patients and families along the surgical care pathway.
Becker muscular dystrophy mice showed site-specific decay of type IIa fibers with capillary change in skeletal muscle
6fe780e1-d82b-4f6e-a57c-1233f275363a
11913446
Musculoskeletal System[mh]
Becker muscular dystrophy (BMD), an X-linked muscle disorder, is characterized by progressive muscle wasting and weakness, mostly caused by an in-frame variant in DMD encoding the sarcolemmal protein dystrophin. In BMD muscle tissues, truncated and reduced dystrophin is expressed; therefore, its clinical status is generally milder than that of Duchenne muscular dystrophy (DMD), an allelic disorder with complete loss of dystrophin ( ). BMD is clinically heterogeneous, with some affected individuals experiencing a near-normal lifestyle and lifespan, while others lose the ability to walk in their late teens or early 20s ( ; ; ). A natural history study of patients with BMD revealed that part of the skeletal muscle phenotype may be associated with the genotype of DMD and that there are several patterns of exon deletions associated with severe or milder phenotypes ( ). In-frame exon deletions of BMD accumulate in exons 45–55 (first hotspot) or exons 3–7 (second hotspot) of DMD. In particular, 80% of all in-frame deletions fall within the first hotspot, exons 45–55. Among these, the deletion of exons 45–47 (d45–47) is the most frequent; approximately 30% of patients with BMD have this in-frame deletion. Deletion of exons 45–48 (d45–48) is the second most frequent; approximately 18% of patients with BMD have this deletion and a milder phenotype than d45–47. In contrast, the deletion of exons 45–49 (d45–49) has a more severe phenotype than d45–47 ( ). The relationship between the pathomechanisms of disease severity and each exon deletion remains unclear. In the muscle tissue of patients with BMD, truncated dystrophin expression levels are lower than those of healthy controls, and dystrophin immunohistochemistry shows a ‘faint and patchy’ staining pattern ( ; ). Reports on the correlation between severity and truncated dystrophin expression levels are still conflicting: one report found that truncated dystrophin expression levels in muscle tissues correlate well with the clinical severity of BMD ( ), whereas another reported that truncated dystrophin levels do not appear to be a major determinant of disease severity in BMD ( ). Dystrophin is a large filamentous protein with a molecular weight of 427 kDa that protects the sarcolemma from mechanical stress during muscle contraction. Dystrophin connects to cytoskeletal protein families that can assemble into macromolecular structures with a large number of proteins and lipids ( ). Recently, a report using an in silico prediction model suggested that BMD clinical heterogeneity is associated with changes in dystrophin structure generated by exon deletion ( ). Among the representative exon deletions in the BMD variant hotspot, they predicted that d45–48 and d45–51 display a structure similar to that of wild-type (WT) dystrophin (hybrid repeat), whereas d45–47 and d45–49 lead to proteins with an unrelated structure (fractional repeat). Exon deletions associated with fractional repeats are expected to lead to a more severe BMD phenotype than those associated with hybrid repeats; however, the influence of changes in dystrophin structure on clinical BMD severity is not fully understood. Recently, two animal models of BMD have been developed. One group generated the first BMD rat model carrying a deletion of exons 3–16 of the rat Dmd using CRISPR/Cas9 ( ). This BMD rat model exhibited muscle degeneration, muscle fibrosis, heart fibrosis, and reduced truncated dystrophin levels. 
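As a brief aside on the in-frame versus out-of-frame distinction discussed above: a multi-exon deletion preserves the dystrophin reading frame (the typical BMD situation) when the summed nucleotide length of the deleted exons is divisible by three, and disrupts it (the typical DMD situation) otherwise. The sketch below only illustrates that rule; the exon lengths are illustrative placeholders rather than values taken from the DMD reference transcript, chosen so that the three deletions studied here come out in-frame.

```python
# Placeholder exon lengths in nucleotides (illustrative only; real sizes should be
# looked up in the DMD reference transcript).
exon_length = {45: 176, 46: 148, 47: 150, 48: 186, 49: 102}

def deletion_is_in_frame(deleted_exons, lengths):
    """A deletion preserves the reading frame when the total number of
    deleted nucleotides is a multiple of three."""
    return sum(lengths[e] for e in deleted_exons) % 3 == 0

for deletion in ([45, 46, 47], [45, 46, 47, 48], [45, 46, 47, 48, 49]):
    label = f"d{deletion[0]}-{deletion[-1]}"
    frame = "in-frame" if deletion_is_in_frame(deletion, exon_length) else "out-of-frame"
    print(f"{label}: {frame}")
```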
Another group generated a BMD mouse model carrying a deletion in exons 45–47 of the mouse Dmd ( ). The BMD mice show skeletal muscle weakness, heart dysfunction, increased myofiber size variability, increased centronuclear fibers, and reduced truncated dystrophin. However, the underlying mechanisms of BMD severity remain unclear, and comparative investigations using several BMD animal models carrying exon deletions associated with various severities are necessary. To clarify the issue, we generated three types of BMD mice with representative exon deletions in the BMD hotspots: d45–47, d45–48, and d45–49 using CRISPR–Cas9 genome editing and compared the phenotypes and histopathological changes. Establishment of BMD mice carrying d45–48, d45–47, and d45–49 in Dmd To create mouse strains carrying three in-frame deletions: d45–48, d45–47, and d45–49, we used the CRISPR/Cas9 genome editing technique to introduce approximately 160,000 bp genomic deletions into the endogenous murine Dmd. Guide RNAs (gRNAs) were designed to target protospacer adjacent motif sequences upstream of Dmd exon 45 and downstream of Dmd exons 47, 48, and 49. We confirmed the genomic deletion of dystrophin exons 45–47, 45–48, and 45–49 by cDNA sequencing ( ). By analyzing the deletion breakpoint, we confirmed that double-strand breaks were formed inside the sequences corresponding to gRNAs in d45–48, d45–47, and d45–49 mice ( ). We performed phenotypic testing of WT, d45–48, d45–47, d45–49, and mdx mice at 1, 3, 6, and 12 months of age. d45–49 and mdx mice had larger body weights at 1 and 3 months of age, whereas body weights of d45–48 and d45–47 mice did not differ from those of WT mice ( ). d45–49 mice showed muscle weakness at an earlier age compared to d45–48 and d45–47 mice To assess the muscle strength of the BMD mice, we conducted forelimb grip tests at 1, 3, 6, and 12 months of age and hanging wire tests at 3 months of age, and compared the results with those of WT and mdx mice. At the age of 1 month, d45–47, d45–49, and mdx mice showed reduced forelimb grip strength compared to WT and d45–48 mice. However, the weakness of d45–47 mice was milder than that of d45–49 and mdx mice of the same age. At the age of 3 months, the reduction in grip strength in d45–47 mice became marked, and d45–48 mice exhibited a slight reduction in forelimb grip strength. Finally, after 6 months, all three BMD strains reached similar levels of reduced forelimb grip strength, although the weakness was not as severe as that of mdx mice ( ). The hanging wire test showed reduced suspension times in the d45–49 and mdx mice, whereas the WT, d45–48, and d45–47 mice exceeded the 600 s time limit ( ). Furthermore, video observation during the hanging wire test revealed differences in how the mice moved while grasping the wire: WT, d45–48, and d45–47 mice could quickly bring their posterior legs up toward their faces and catch the wire; in contrast, d45–49 and mdx mice could not raise their posterior legs above their chest ( ). d45–49 mice showed earlier muscle degeneration and fibrosis compared with d45–48 and d45–47 mice Hematoxylin–eosin (HE) staining of the tibialis anterior (TA) muscles showed muscle degeneration and regeneration (i.e., opaque fibers, necrotic fibers, and centronuclear fibers) in all BMD and mdx mice, except for WT mice ( ). 
Opaque fibers, which are thought to be precursors of necrotic fibers, increased as early as 1 month of age in d45–49 mice compared with WT mice; in contrast, the proportion of opaque fibers differed significantly between d45–47 and WT mice at 3 months, with only an increasing tendency at 1 month ( ). Necrotic fibers were seen only in mdx mice at 1 month, but were also increased in d45–49 mice at 3 months ( ). Centronuclear fibers, which are thought to be regenerated fibers after muscle damage, were increased in mdx mice and slightly increased in all BMD mice at 1 month. At 3 months, centronuclear fibers were increased in the d45–49 mice ( ). In addition, Sirius Red staining of the TA muscles revealed an increase in fibrosis in all BMD and mdx mice, except in WT mice ( ). At 1 month, an increased fibrotic area was seen only in mdx mice, but after 3 months, it was seen also in d45–49 mice. After 6 months, fibrosis gradually became apparent, even in d45–48 and d45–47 mice ( ). HE and Sirius Red staining revealed increased centronuclear fibers and fibrosis in the diaphragm muscles, and increased myocardial fibrosis in the cardiac muscles in all BMD mice at 12 months ( ). However, these changes were minor compared with those observed in mdx mice. All BMD mice exhibited muscle degeneration and fibrosis; however, the extent of these findings depended on the exon deletion: time-course histopathological observations revealed that muscle degeneration and fibrosis worsened earliest and most severely in d45–49 mice, followed by d45–47 and then d45–48 mice ( ). All BMD mice showed a decrease in neuronal nitric oxide synthase expression and no utrophin overexpression We performed immunohistochemical analysis of dystrophin and the dystrophin–glycoprotein complex (DGC) proteins alpha-sarcoglycan (aSG) and neuronal nitric oxide synthase (nNOS) in TA muscles at 1 and 3 months. Dystrophin immunohistochemistry demonstrated a reduction of dystrophin and ‘faint’ and ‘patchy’ staining patterns in all BMD mice. Consistent with the lack of an nNOS-binding site, encoded by exons 42–45, all BMD mice showed sarcolemmal nNOS expression decreased to levels similar to those in mdx mice. In contrast, consistent with the retained dystroglycan-binding site (encoded by exons 63–70), aSG expression was detected on the sarcolemma in all BMD mice, although expression levels were slightly lower than in WT mice ( ). Western blot (WB) analysis demonstrated truncated dystrophin expression in all BMD mice at 1 and 3 months ( ). The truncated dystrophin expression levels were reduced to 30–40% at 1 month, and 10–20% at 3 months compared to those in WT mice ( ), despite adequate levels of dystrophin mRNA expression ( ). By WB analysis, the utrophin expression levels showed only an increased tendency in all BMD mice at 3 months, whereas there was a significant increase in mdx mice (8-fold at 1 month and 30-fold at 3 months) compared to WT mice ( ). In contrast, utrophin mRNA expression levels did not differ among WT, BMD, and mdx mice ( ). Further, WB analysis demonstrated decreased levels of nNOS in all BMD mice as well as mdx mice at 3 months ( ), despite adequate levels of nNOS mRNA expression ( ). BMD mice showed site-specific muscle degeneration and type IIa fiber reduction Generally, patients with muscle diseases, including BMD, show changes in MRI or CT images of a particular group of muscles ( ; ). This allows investigators to use muscle imaging as a diagnostic tool. 
Thus, we examined whether the histopathological changes differ among particular parts of the muscles in BMD mice at 3 months, by observing the inner and outer parts of the TA, four parts of the quadriceps: vastus lateralis (VL), rectus femoris (RF), vastus medialis (VM), vastus intermedius (VI), and three parts of the erector spinae: multifidus (MF), longissimus (LG), and iliocostalis (IC). mdx mice showed diffuse muscle degeneration on HE staining and no remarkable histopathological differences between the inner and outer part of the TA. In contrast, BMD mice showed significant site-specific muscle degeneration in the inner part of the TA compared with the outer part ( , upper panel ). Even in the quadriceps and erector spinae, d45–49 mice showed site-specific muscle degeneration predominantly in the RF, VM, VI, MF, and LG, whereas less muscle degeneration was observed in the VL and IC. In contrast, mdx mice exhibited diffuse muscle degeneration in all parts of the quadriceps and erector spinae ( ). Serum creatine kinase (CK) levels at 1, 3, 6, and 12 months were high in mdx mice, and were two- to fourfold higher in all BMD mice after 3 months compared to WT mice, although the difference was not statistically significant ( ). We observed site-specific muscle degeneration in BMD mice, especially in the deep muscles or the inner part of the same muscle. Next, we examined the muscle fiber composition in each muscle or part of the same muscle in WT, BMD, and mdx mice using MYH-1, -2, -4, and -7 immunohistochemistry, corresponding to type IIx, IIa, IIb, and I fibers, respectively. In WT mice at 3 months, type IIb fibers were dominant in the outer part of the TA, while type IIa and IIx fibers were dominant in the inner part of the TA, although type I fibers were rare in the TA ( , middle and lower panels ). These deviations in muscle fiber type in the TA were also seen in BMD and mdx mice at 3 months; however, the ratio of type IIa fibers in the inner part of the TA was decreased in all BMD and mdx mice compared with WT mice. In contrast, the ratio of type IIb to type IIx fibers was unchanged in all BMD and mdx mice compared with that in WT mice ( ). The deviation in muscle fiber type was seen even in the quadriceps and erector spinae in WT mice at 3 months: type IIb fibers were present dominantly in the VL and outer part of the RF and IC, and type IIa fibers were present dominantly in the inner part of the RF, VM, VI, MF, and LG. Type I fibers, which are rare in the TA, were present in the VI, MF, and LG. In particular, the VL and IC were mostly composed of type IIb fibers and rarely contained other fiber types ( ). In contrast, all BMD and mdx mice showed the same fiber type deviation in the quadriceps and erector spinae at 3 months; however, the ratio of type IIa fibers in the inner part of the RF, VM, VI, MF, and LG was lower than that in WT mice. In contrast, the ratio of type I and IIb fibers was unchanged or slightly increased in all BMD and mdx mice compared to that in WT mice ( ). We examined the cross-sectional areas (CSAs) of TA muscles in BMD and mdx mice to evaluate the association between type IIa fiber reduction and muscle atrophy. Their CSAs at 3 months were higher than those of WT mice, consistent with muscle hypertrophy ( ). In addition, Murf1 and atrogin-1 mRNA expression levels (representative muscle atrophy-inducing factors) did not differ among WT, BMD, and mdx mice ( ). 
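The regional fiber-type proportions above boil down to counting fibers per MYH isoform within each sampled region and dividing by that region's fiber total. Below is a minimal sketch of that tabulation with hypothetical per-fiber calls; the isoform-to-type mapping follows the one stated in the text (MYH-1 for IIx, MYH-2 for IIa, MYH-4 for IIb, MYH-7 for I).

```python
from collections import Counter

# Isoform-to-fiber-type mapping as stated in the text.
MYH_TO_TYPE = {"MYH-1": "IIx", "MYH-2": "IIa", "MYH-4": "IIb", "MYH-7": "I"}

# Hypothetical per-fiber immunohistochemistry calls for two regions of the TA.
regions = {
    "TA inner": ["MYH-2", "MYH-2", "MYH-1", "MYH-2", "MYH-1", "MYH-4"],
    "TA outer": ["MYH-4", "MYH-4", "MYH-4", "MYH-1", "MYH-4", "MYH-4"],
}

for region, calls in regions.items():
    type_counts = Counter(MYH_TO_TYPE[call] for call in calls)
    total = sum(type_counts.values())
    summary = ", ".join(
        f"{fiber_type}: {n}/{total} ({n / total:.0%})"
        for fiber_type, n in sorted(type_counts.items())
    )
    print(f"{region}: {summary}")
```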
Type IIa fibers were decreased after 3 months in BMD mice The composition of muscle fiber types changes according to postnatal development in mice ( ). We examined the changes in muscle fiber type composition during development in WT, BMD, and mdx mice, especially focusing on the type IIa fiber transition. Immunohistochemistry of MYH-1, -2, and -4 in the TA at 1, 3, 6, and 12 months revealed time-course changes in the number of type IIa fibers in WT, BMD, and mdx mice ( ). In WT mice, the count of type IIa fibers was approximately 100 per TA section at 1 month, increased twofold at 3 and 6 months, and then decreased slightly at 12 months. In contrast, the count of type IIa fibers in BMD mice was the same as that in WT mice at 1 month but decreased after 3 months, the opposite of the pattern in WT mice. Furthermore, in mdx mice, the number of type IIa fibers was low at 1 month and did not recover throughout the observation period ( ). Type IIa fibers were delayed in recovery in cardiotoxin-induced regeneration models in WT mice Although the mechanisms of type IIa fiber reduction in BMD mice remain unclear, there is a report about a shift in the muscle fiber type composition in a cardiotoxin (CTX)-induced regeneration model in the TA of WT mice, which suggested the vulnerability of type IIa fibers to CTX injection ( ). Next, we observed the changes in type IIa, IIb, and IIx fibers on days 0 (CTX (−)), 1, 3, 5, 7, 14, and 28 after CTX injection. HE staining of TA after CTX injection showed extensive muscle fiber necrosis on day 1, infiltration of small mononuclear cells between the necrotic fibers on day 3, appearance of regenerated centronuclear muscle fibers on day 5, and progressive repair of muscle tissues with growth of regenerated muscle fibers on days 7, 14, and 28 after CTX injection ( , left panels ). MYH-1, -2, and -4 immunohistochemistry in the TA after CTX injection revealed loss of staining in all type IIa, IIb, and IIx fibers, in keeping with muscle necrosis, at days 0 and 3 after CTX injection. On days 5 and 7 after CTX injection, only type IIb fibers appeared, with the recovery of regenerated muscle fibers. The recovery of type IIa and IIx fibers was delayed compared with that of type IIb fibers, with type IIa and IIx fibers reappearing at days 14 and 28 after CTX injection, respectively ( , middle and right panels). Capillary formation contacting type IIa fibers was altered in BMD mice compared with WT mice It is hypothesized that vascular dysfunction accompanied by sarcolemmal nNOS reduction is involved in the pathomechanisms of muscle impairment in human DMD ( ; ). Thus, we examined the capillaries supporting muscle fibers using PECAM-1 immunohistochemistry in the inner and outer parts of the TA in WT, d45–49, and mdx mice at 3 months. WT mice showed ‘ring-pattern’ capillaries contacting type IIa and IIx fibers in the inner part of the TA. In contrast, in d45–49 and mdx mice, capillaries contacting type IIa and IIx fibers in the inner part of the TA changed morphologically to a ‘dot-pattern’ ( ). In WT mice, capillaries circumferentially contacting the fiber were abundant around type IIa fibers, forming a distinct ‘ring-pattern’, whereas capillaries contacting type IIx fibers were sparser, forming an ‘incomplete ring-pattern’. 
In d45–49 and mdx mice, these circumferentially contacting capillaries were reduced around the type IIa and IIx fibers, and changed to a ‘dot-pattern’. Capillaries contacting around type IIb and I fibers showed a ‘dot-pattern’ even in WT mice ( ). According to these morphological capillary changes, the endothelial area in contact with both type IIa and IIx fibers was decreased in d45–49 and mdx mice compared with WT mice. The reduction rate around type IIa fibers (60%) was larger than that around type IIx fibers (35%), reflecting the difference between the ‘ring-pattern’ around type IIa fibers and the ‘incomplete ring-pattern’ around type IIx fibers in WT mice ( ).
In this study, we established three types of BMD mice carrying the d45–48, d45–47, and d45–49.
All BMD mice showed muscle weakness and histopathological changes, including muscle degeneration and fibrosis; however, these changes appeared at different times with each exon deletion. In addition, we confirmed decreased truncated dystrophin levels and decreased nNOS expression levels in all BMD mice. Furthermore, unlike mdx mice, BMD mice showed site-specific muscle degeneration in particular muscle parts, especially those rich in type IIa fibers. The phenotypic evaluation confirmed that the d45–49 and mdx mice had larger body weights at 1 and 3 months compared with d45–48, d45–47, and WT mice; however, after 6 months, there was no difference in body weight between the groups. mdx mice are generally heavier than WT mice at a young age, although the difference between mdx and WT mice disappears with age ( ; ). The body weights of mdx mice in our study were consistent with previous reports, and we demonstrated that the d45–49 mice showed changes in body weight similar to those of mdx mice. In this study, muscle weakness in forelimb grip strength in mdx mice was already apparent from 1 month and persisted through 12 months compared with WT mice. In contrast, all BMD mice showed muscle weakness in forelimb grip strength compared to WT mice, but the onset differed depending on the type of exon deletion. Muscle weakness was demonstrated in the d45–49 mice at 1 month, similar to mdx mice, but became apparent in the d45–48 and d45–47 mice only after 3 months. All BMD mice reached the same level of muscle weakness after 6 months, although it remained milder than that of mdx mice. Furthermore, analysis of the hanging wire test revealed decreased power to pull up the trunk in BMD mice, especially in the d45–49 mice. Generally, mdx mice show muscle weakness in grip strength ( ) and hanging wire tests ( ), and this weakness is already seen at 1 month ( ; ). Furthermore, mdx mice have difficulty pulling themselves up from the hanging position ( ), which is consistent with our results. BMD rats did not show significant muscle weakness, and BMD mice carrying d45–47 showed muscle weakness at 10–15 weeks compared to WT mice, although it was milder than that in mdx mice ( ; ). In contrast, we were able to demonstrate differences in severity according to exon deletion in our BMD mice, consistent with the severity of human BMD carrying d45–48, d45–47, and d45–49. The histopathological changes in our mdx mice correspond to previous reports ( ; ; ). BMD rats showed muscle degeneration accompanied by necrotic fibers and inflammatory cell infiltration at 1 month, which became conspicuous after 2 months, and muscle fibrosis was seen after 2 months ( ). In addition, BMD mice carrying d45–47 showed muscle degeneration with necrotic fibers and immune cell infiltration, and an increase in fibrotic areas ( ). In this study, all BMD mice exhibited muscle degeneration and fibrosis, corresponding to previous reports; furthermore, we revealed that the timing of their appearance varied depending on the type of exon deletion. All BMD mice showed faint and patchy dystrophin staining patterns and decreased truncated dystrophin expression levels. These findings are typical of the muscles of patients with BMD ( ; ). Furthermore, sarcolemmal nNOS reduction was observed in all BMD mice to levels similar to those in mdx mice, consistent with the loss of the nNOS-binding site caused by the exon deletions.
Unlike in normal muscle, sarcolemmal nNOS expression is decreased and altered in localization not only in patients with DMD and in mdx mice ( ), but also in patients with BMD ( ; ). The sarcolemmal expression of aSG was retained in BMD mice; however, the expression level was slightly decreased. In patients with BMD, expression of DGC proteins, including aSG, is usually detected in the sarcolemma ( ; ). In our BMD mice, utrophin expression was slightly increased compared with WT mice, but this change was smaller than that in mdx mice. Utrophin is overexpressed in the muscles of human DMD ( ; ) and mdx mice ( ; ), but not in the muscles of patients with BMD ( ; ; ; ). All of our BMD mice showed reduced nNOS expression and residual aSG expression on the sarcolemma and no overexpression of utrophin. Our BMD mice showed differences in the onset of muscle weakness, muscle degeneration, and fibrosis according to the exon deletion, but the mechanisms underlying these deletion-dependent differences remain unclear. We confirmed decreased levels of truncated dystrophin in all BMD mice; however, the expression levels were almost the same despite differences in phenotypic severity. Therefore, the severity of BMD in mice might be influenced by a qualitative shift in the truncated dystrophin protein associated with each exon deletion, rather than by quantitative changes. Indeed, using an in silico prediction model, it has been reported that in-frame exon deletions can induce a structural shift in dystrophin, and that collapse of the dystrophin structure might influence BMD severity ( ). We demonstrated site-specific muscle degeneration in BMD mice, unlike mdx mice, which showed diffuse and non-specific muscle degeneration. Even in mdx mice, multiple and heterogeneously distributed MRI hyperintensities are seen over a short period of approximately 13–19 weeks ( ), whereas BMD mice showed selective and site-specific muscle degeneration, especially in the deep muscles or the inner part of the same muscle. In human BMD, specific muscles are known to show intensity changes on muscle MRI ( ). Serum CK levels were two- to fourfold higher in all BMD mice than in WT mice after 3 months, similar to a previous study of a BMD rat model ( ). However, the difference in serum CK levels between BMD and WT mice was smaller than that between mdx and WT mice. These results may be due to the smaller area of muscle degeneration in BMD mice, owing to its uneven distribution in the deep part of each muscle, compared with mdx mice, which showed diffuse muscle degeneration. It is known that muscles located near the body surface are rich in white muscle fibers, while the deep muscles are rich in intermediate and red muscle fibers ( ). In our BMD mice, type IIa fibers were decreased compared to WT mice, especially in the inner muscle region, where type IIa fibers are abundant. We found that type IIa fiber reduction started after 3 months in BMD mice, whereas WT mice showed an increase in type IIa fibers after 3 months. In contrast, type I, IIx, and IIb fibers were unchanged in BMD mice compared with WT mice ( ). Type IIa fibers are also known as fast-oxidative fibers ( ) and are more fatigue-resistant than type IIb and IIx fibers, which are the least and second-least fatigue-resistant fiber types, respectively ( ); accordingly, type IIa fiber reduction is thought to contribute to muscle fatigability ( ).
Recently, it was reported that mice lacking the RNA-binding protein Musashi-2, which is predominantly expressed in slow-type muscle fibers, showed a reduction in type IIa fibers along with reduced muscle contraction force ( ). These findings suggest that a decrease in type IIa fibers contributes to the muscle weakness and fatigability in BMD mice. In addition, only type I, IIa, and IIx fibers are present in human skeletal muscle, and type IIb fibers appear to be absent ( ; ). Thus, the reduction of type IIa fibers might have a greater effect on skeletal muscle function in human BMD than in BMD mice. We examined the association between type IIa fiber reduction and muscle atrophy, but there was no remarkable CSA reduction or change in muscle atrophy-inducing factors. The vulnerability of type IIa fibers to CTX-induced muscle damage has been reported ( ), and we also observed that the recovery of type IIa and IIx fibers was delayed compared to that of type IIb fibers after CTX injection. This vulnerability and delayed recovery of type IIa fibers may partly explain the type IIa fiber reduction in BMD mice, but the recovery of type IIx fibers was slower than that of type IIa fibers after CTX injection. Therefore, the type IIa fiber-specific decay in BMD mice might not be explained by this vulnerability and delayed recovery during muscle degeneration and regeneration. A decrease in type IIa fibers was also observed in microgravity-induced muscle atrophy in WT mice, in which a decrease in sarcolemmal nNOS occurred ( ). It has been reported that the expression level of nNOS is higher in type IIa fibers than in type I and IIb fibers ( ), that nNOS-deficient mice have reduced type IIa fibers compared to WT mice ( ), and that they show reduced capillary density only in the inner part of the muscle ( ). By examining BMD mice, we found for the first time morphological capillary changes from a ‘ring-pattern’ to a ‘dot-pattern’, together with decreased nNOS expression in the sarcolemma and fewer capillaries in circumferential contact with type IIa fibers. Capillary changes to a ‘dot-pattern’ were also seen around type IIx fibers in BMD mice, but in WT mice, capillaries around type IIx fibers were sparse compared with those around type IIa fibers and showed an ‘incomplete ring-pattern’. In addition, capillaries around type IIb and I fibers showed a ‘dot-pattern’ even in WT mice. These results suggest that type IIa fibers may require numerous capillaries and maintained blood flow compared with other muscle fibers, and this high requirement for blood flow might be associated with the type IIa fiber-specific decay in BMD mice. Between the main muscle capillaries running parallel to the muscle fibers, there are transversely interconnected branches and capillary loops ( ), and the capillary changes in BMD mice may be associated with deterioration of these interconnected branches and capillary loops ( ). We examined the transversely interconnected branches and capillary loops using longitudinal muscle sections and confirmed that there were fewer interconnected capillaries in BMD and mdx mice than in WT mice ( ). Vascular dysfunction has been implicated in muscle damage in a canine model of DMD ( ) and in human DMD ( ), and our analysis of BMD mice also suggests that the reduction of type IIa fibers may be influenced by vascular dysfunction with reduced sarcolemmal nNOS, as well as by fiber fragility and delayed recovery.
Animals
WT control mice (strain: C57BL/6J) were purchased from Jackson Laboratory (Bar Harbor, ME, USA), and dystrophin-deficient mdx mice (strain: C57BL/6J) were a gift from the National Center of Neurology and Psychiatry (Tokyo, Japan). The mice were housed in plastic cages in a temperature-controlled environment (23 ± 2°C) with a 12-hr light/dark cycle and free access to food and water. All animal experiments were performed in accordance with the institutional guidelines and approved by the Institutional Review Board of Shinshu University, Japan.
Generation of CRISPR/Cas9-induced BMD mice
To introduce the mutations d45–48, d45–47, and d45–49 into the mouse Dmd , we designed four gRNAs corresponding to intron sequences 44, 47, 48, and 49 (gRNA-44, -47, -48, and -49, respectively). Next, we electroporated pairs of gRNAs, combining gRNA-44 and -47 to generate d45–47, gRNA-44 and -48 to generate d45–48, and gRNA-44 and -49 to generate d45–49, together with Cas9 nuclease (Integrated DNA Technologies), into embryos from C57BL/6J female mice. F0 mice with the three desired mutations were selected based on the results of PCR analysis and DNA sequencing of tail DNA using intron primers placed upstream and downstream of the deletions. F0 female mice were repeatedly backcrossed with WT male mice, and F4–F8 male BMD mice carrying d45–48, d45–47, and d45–49 were then used in our study, together with WT control and mdx mice (male mice; n = 3; at the ages of 1, 3, 6, and 12 months). Furthermore, we examined multiple mouse lines with the same exon deletions of d45–48 and d45–47, and confirmed muscle weakness and pathological changes similar to those observed in the mouse line used in this study (data not shown).
Serum CK
Whole blood was collected from the abdominal aorta under anesthesia with isoflurane in WT, d45–48, d45–47, d45–49, and mdx mice at 1, 3, 6, and 12 months ( n = 3), and samples were centrifuged at 3000 × g for 12 min at 4°C. CK activity in the separated serum was assayed using an automated biochemical analyzer (JCA-BM6050, JEOL) at Oriental Yeast Co, Ltd (Tokyo, Japan).
Skeletal muscle function
The forelimb grip strength test was performed in all mice at 1, 3, 6, and 12 months ( n = 10 at 1 and 3 months, n = 4 at 6 and 12 months) in compliance with ‘Assessing functional performance in the mdx mouse model’ ( ). Briefly, mice were placed with their forelimbs on a T-shaped bar and gently pulled backward until their grasp broke. Peak force was automatically recorded using a grip meter (MK-380V, Muromachi). Fifteen tests were performed with a short resting period between each test, and grip strength normalized to body weight was determined by taking the average of the three highest of the 15 values. The hanging wire test was performed at 3 months in compliance with the same assessments ( ).
Muscle tissue extraction and preparation
TA muscles were dissected from WT, d45–48, d45–47, d45–49, and mdx mice at 1, 3, 6, and 12 months ( n = 3), frozen in isopentane cooled by liquid nitrogen for histological and immunohistochemical analyses and for protein and RNA isolation, and stored at −80°C. Seven-µm-thick transverse cryostat sections were cut at the center of the TA muscles, placed on slides, air-dried, and stained with hematoxylin and eosin (HE) and Sirius Red. The quadriceps, erector spinae, diaphragm, and heart muscles were dissected, frozen, and stained with HE as noted above. Sections were viewed and photographed using a BZ-X710 digital camera system (Keyence).
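For readers who want to reproduce the grip-strength summary statistic described above (the mean of the three highest of 15 trial forces, normalized to body weight), a minimal sketch follows. The function name, units, and example values are illustrative assumptions and do not represent the authors' actual analysis code.

```python
# Minimal sketch of the grip-strength summary described above:
# mean of the three highest of 15 trial forces, divided by body weight.
# Names, units, and numbers are illustrative assumptions only.

def normalized_grip_strength(trial_forces, body_weight_g):
    """Return grip strength normalized to body weight.

    trial_forces  : peak forces from the 15 trials (e.g., gram-force)
    body_weight_g : body weight of the mouse in grams
    """
    if len(trial_forces) < 3 or body_weight_g <= 0:
        raise ValueError("need at least three trials and a positive body weight")
    top_three = sorted(trial_forces, reverse=True)[:3]
    return (sum(top_three) / 3) / body_weight_g

# Example with made-up trial values for one mouse weighing 25 g:
forces = [92, 101, 97, 88, 110, 105, 99, 95, 102, 108, 90, 93, 100, 98, 96]
print(round(normalized_grip_strength(forces, 25.0), 2))  # 4.31
```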
Immunohistochemical analysis
For immunofluorescence staining, serial 7-µm-thick cross sections of frozen skeletal muscle tissues were mounted on glass slides. The sections were air-dried, blocked in 20% goat serum in phosphate-buffered saline (PBS) for 15 min, and incubated with primary antibodies in 5% goat serum in PBS at 4°C overnight. The sections were washed briefly with PBS before incubation with secondary antibodies for 3 hr at 4°C and then washed four times with PBS. The slides were mounted using VECTASHIELD mounting medium (Vector Laboratories), and images were captured using a BZ-X710 digital camera system (Keyence). The primary antibodies were as follows: mouse anti-dystrophin (NCL-DYS2, Leica Biosystems), rabbit anti-nNOS (61-7000, Invitrogen), mouse anti-alpha-sarcoglycan (NCL-a-SARC, Leica Biosystems), mouse anti-MYH-1 (6H1, DSHB), mouse anti-MYH-2 (SC-71, DSHB), mouse anti-MYH-4 (BF-F3, DSHB), mouse anti-MYH-7 (BA-D5, Millipore), and rat anti-PECAM-1 (550274, BD Bioscience).
Morphometric analysis
Morphometric analysis was performed to identify opaque, necrotic, and centronuclear fibers using HE-stained TA sections at 1 and 3 months ( n = 3), and to determine fiber type composition using MYH-1, -2, -4, and -7 (type IIx, IIa, IIb, and I fibers, respectively) immunohistochemically stained TA, quadriceps, and erector spinae muscles at 3 months ( n = 3). At least 500 fibers were analyzed for each muscle, and we calculated the percentages of opaque, necrotic, and centronuclear fibers, and the percentages of type IIx, IIa, IIb, and I fibers. All images were obtained under identical conditions at the same magnification. Muscle fibrosis was assessed on 7-µm-thick TA muscle sections stained with Sirius Red at 1, 3, 6, and 12 months. The area occupied by fibrosis was detected using ImageJ, and the percent fibrosis area was calculated. In addition, analyses of capillaries contacting type IIa fibers were performed on MYH-2 and PECAM-1 immunohistochemically stained muscle sections, and the number and size of PECAM-1-positive capillaries contacting type IIa fibers were counted. The endothelial area was calculated from the number and size of capillaries according to previous methods ( ; ).
Total protein extraction and western blotting
Muscle tissues (20 mg) were homogenized in 150 µl of RIPA buffer (WAKO) containing Halt phosphatase and proteinase inhibitor cocktail (Thermo Fisher Scientific) using an ultrasonic homogenizer (VCX-130; Sonics and Materials Inc). After centrifugation (20 min at 15,000 × g ), the protein concentration in the supernatant was estimated using a BCA Protein Assay Kit (Thermo Fisher Scientific). Protein extracts from each sample were denatured for 5 min at 95°C in NuPAGE LDS Sample Buffer (Thermo Fisher Scientific), and 10 µg/lane of protein extract was subjected to 3–8% NuPAGE Novex Tris-acetate gel electrophoresis (Thermo Fisher Scientific) with HiMark pre-stained standard proteins (Thermo Fisher Scientific) at 150 V for 70 min. The resulting gel was transferred onto a 0.2-µm nitrocellulose membrane (Bio-Rad) at 400 mA for 40 min using EzFastBlot HMW buffer (ATTO). Membranes were incubated with primary antibodies and peroxidase-conjugated secondary antibodies (Bio-Rad) using an iBind Flex Western Device (Thermo Fisher Scientific). All membranes were visualized using ECL Prime western blotting detection reagent (Cytiva) and ChemiDoc (Bio-Rad). Band intensity and total protein normalization were determined using Image Lab software (Bio-Rad).
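The percent-fibrosis-area measurement described in the morphometric analysis above was performed in ImageJ; purely as an illustration of the underlying calculation (positive area divided by section area), a small sketch is given below. The thresholding step, array shapes, and values are assumptions and do not reproduce the authors' ImageJ workflow.

```python
# Illustrative sketch of a percent-area calculation (e.g., Sirius Red-positive
# fibrosis as a fraction of the section area). Threshold and arrays are
# assumptions, not the authors' ImageJ settings.
import numpy as np

def percent_positive_area(stain_intensity, tissue_mask, threshold):
    """stain_intensity : 2D array of staining intensities
    tissue_mask     : boolean 2D array marking pixels belonging to the section
    threshold       : intensity above which a pixel counts as positive
    """
    positive = (stain_intensity > threshold) & tissue_mask
    return 100.0 * positive.sum() / tissue_mask.sum()

# Toy example: a 100 x 100 section with a 20 x 20 positive patch -> 4.0%
intensity = np.zeros((100, 100))
intensity[10:30, 10:30] = 1.0
mask = np.ones((100, 100), dtype=bool)
print(percent_positive_area(intensity, mask, threshold=0.5))  # 4.0
```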
The primary antibodies were as follows: rabbit anti-dystrophin (ab15277, Abcam) and mouse anti-utrophin (sc-33700, Santa Cruz Biotechnology).
RNA isolation and RT-PCR
Frozen muscle tissues (20 mg) were homogenized, and total RNA was isolated using the RNeasy Fibrous Tissue Kit (QIAGEN). cDNA was synthesized using the QuantiTect Reverse Transcription Kit (QIAGEN, Hilden, Germany). The levels of mRNA and 18S rRNA were quantified by qPCR using Fast SYBR Green Master Mix (Thermo Fisher Scientific) and QuantStudio 3 (Thermo Fisher Scientific) with 10 nM of each primer in a final volume of 10 µl. Thermal cycling conditions for all primers were 10 min at 95°C, followed by 40 cycles of 15 s at 94°C and 30 s at 60°C. Each mRNA quantity was calculated using the delta-delta-Ct method with 18S rRNA as the housekeeping gene. The primer sequences used for RT-PCR are shown in .
CTX muscle injury and histochemistry
We injected 100 µl of CTX (10 mM in 0.9% NaCl) (Sigma-Aldrich) into the TA muscle of WT mice at 6 weeks of age using a 27-gauge needle and a 1-ml syringe under anesthesia with isoflurane, according to previous methods ( ). The TA muscles were isolated before the injection and at 1, 3, 7, and 14 days after it ( n = 1); 7-µm transverse cryostat sections of frozen muscles were stained with HE and by MYH-1, -2, -4, and -7 immunohistochemistry.
Statistical analysis
Results are expressed as mean ± SEM. Statistical analysis was performed to establish the significance between groups using one-way ANOVA. Intergroup comparisons were performed using Bonferroni correction. Statistical significance was set at p < 0.05. Statistical analyses were performed using GraphPad Prism version 9.0 (GraphPad Software Inc, La Jolla, CA, USA).
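As a worked illustration of the delta-delta-Ct quantification described in the RT-PCR section above (with 18S rRNA as the reference gene), a minimal sketch is shown below; the Ct values are invented for illustration, and the function is not the authors' analysis script.

```python
# Minimal sketch of the delta-delta-Ct calculation with 18S rRNA as reference.
# Ct values are invented for illustration only.

def fold_change(ct_target_sample, ct_ref_sample, ct_target_calibrator, ct_ref_calibrator):
    d_sample     = ct_target_sample - ct_ref_sample          # dCt in the sample (e.g., BMD)
    d_calibrator = ct_target_calibrator - ct_ref_calibrator  # dCt in the calibrator (e.g., WT)
    dd_ct = d_sample - d_calibrator
    return 2 ** (-dd_ct)                                     # relative expression vs. calibrator

# Example: target gene Ct 24.0 (18S Ct 10.0) in a BMD sample vs.
# target Ct 25.5 (18S Ct 10.2) in a WT calibrator -> ~2.46-fold higher.
print(round(fold_change(24.0, 10.0, 25.5, 10.2), 2))  # 2.46
```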
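Similarly, the group comparison described in the statistical analysis section (one-way ANOVA followed by Bonferroni-corrected pairwise comparisons) can be sketched as follows; the genotype labels and values are invented, and this is not the authors' Prism analysis.

```python
# Minimal sketch of a one-way ANOVA followed by Bonferroni-corrected pairwise
# t-tests, as described above. Group values are invented for illustration.
from itertools import combinations
from scipy import stats

groups = {
    "WT":     [4.1, 4.3, 3.9],
    "d45-49": [3.0, 2.8, 3.2],
    "mdx":    [2.1, 2.4, 2.2],
}

f_stat, p_anova = stats.f_oneway(*groups.values())
print(f"ANOVA: F = {f_stat:.2f}, p = {p_anova:.4f}")

pairs = list(combinations(groups, 2))
for a, b in pairs:
    _, p = stats.ttest_ind(groups[a], groups[b])
    p_bonferroni = min(p * len(pairs), 1.0)  # Bonferroni correction
    print(f"{a} vs {b}: corrected p = {p_bonferroni:.4f}")
```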
Examining the use of alternative light sources in medico-legal assessments of blunt-force trauma: a systematic review
1bab892b-32c4-4b42-9da0-f42f072664bf
11306313
Forensic Medicine[mh]
Bruises serve as markers of blunt-force trauma and may yield valuable clues about the mechanisms of injury . An accurate and comprehensive bruise analysis is therefore warranted in cases of suspected abuse and assault. However, despite its forensic significance, the task of identifying and documenting bruises remains difficult due to a myriad of factors influencing their visibility. These include the degree of inflicted trauma, the dynamic and distinct process of healing, as well as the diversity of skin tones , which may result in the absence of visible bruising or the presence of bruises deemed too minor to document during medico-legal examinations . To overcome this challenge, a growing volume of research has explored the possibility of using alternate light sources (ALS) to enhance blunt-force trauma documentation . Light can be categorized by its wavelength into the visible light spectrum (VLS), comprising wavelengths between 400 and 700 nm, and the invisible light spectrum, comprising both ultraviolet (UV) and infrared (IR) light, with wavelengths below 400 nm and above 700 nm, respectively (Fig. ). ALS refers to the use of single and narrowband wavelengths within the full spectrum for illumination and is used by law enforcement worldwide to detect biological traces such as blood and semen, as well as chemical agents including gunshot residue . When photons of particular wavelengths are absorbed, they induce electron transitions to higher energy orbitals. Fluorescence occurs when excited electrons return to lower energy states, releasing energy in the form of photons with a lower energy and longer wavelength than the excitation light, a phenomenon referred to as the Stokes shift . Consequently, the emitted light is not visible to the naked eye, requiring the use of specific longpass or bandpass filters that block the return of the excitation light . The hallmark of bruising is the discoloration that occurs as a consequence of ruptured vessels in the dermal layer of the skin. Visualizing the extravasated blood using normal or conventional white light (CWL) is challenging, however, as the majority of light is both reflected by the skin’s surface and absorbed by melanin, secreted by melanocytes located between the surface and dermal layer . This becomes particularly pronounced in darker skin, where higher concentrations of melanocytes persist. On the other hand, emission of a single or narrowband wavelength may penetrate the skin and be absorbed specifically by hemoglobin and its associated breakdown products . This can be perceived as darkened regions on the skin when viewed through distinct filters . Hence, employing ALS to visualize bruising may circumvent the obstacles presented by white-light reflection and melanin concentration. In the age of evidence-based medicine, forensic methods must demonstrate their scientific rigor to ensure that accurate and reliable results are presented during legal proceedings. Consequently, examining the specificity and sensitivity of ALS to understand its effectiveness in discerning bruising from non-bruising, and in detecting all bruising, is paramount. Bruise detection and bruise visibility are related concepts, but they refer to different aspects of bruise sensitivity. Bruise detection is the process of identifying the presence of a bruise, while bruise visibility relates to how apparent or noticeable a bruise is once it has been detected. Specificity, on the other hand, refers to the ability to differentiate bruising from non-bruising.
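Because the review's central question turns on these two measures, a minimal sketch of how sensitivity and specificity are computed from examination outcomes is given below; the counts are invented purely for illustration and are not drawn from the included studies.

```python
# Minimal sketch of the diagnostic measures discussed above.
# TP/FN/TN/FP counts are invented for illustration only.

def sensitivity(true_positive, false_negative):
    """Proportion of actual bruises that the examination detected."""
    return true_positive / (true_positive + false_negative)

def specificity(true_negative, false_positive):
    """Proportion of non-bruised sites correctly judged bruise-free."""
    return true_negative / (true_negative + false_positive)

# Example: 49 of 50 known bruise sites detected; 90 of 100 non-bruised sites
# correctly classified as bruise-free.
print(f"sensitivity = {sensitivity(49, 1):.2f}")   # 0.98
print(f"specificity = {specificity(90, 10):.2f}")  # 0.90
```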
In pursuit of such knowledge, we focus here on answering the question: does the detection and visualization of blunt-force trauma by ALS outperform CWL approaches in medico-legal contexts?
Research question
A systematic review of the literature was conducted according to the Preferred Reporting Items for Systematic Review and Meta-Analysis (PRISMA) framework . The objective was to address the research question: “does detection and visualization of bruising by ALS outperform CWL approaches in medico-legal contexts?”
Search strategy and data sources
Relevant search terms were defined following consultation with an information specialist. Search queries are described in Table , and were constructed using the Boolean operators “AND” and “OR”. Records were collected from the databases of PubMed, Medline, and CINAHL, from inception to 30 April 2024. Supplementary sources were also extracted from the citation lists of selected studies if deemed relevant.
Eligibility criteria
Inclusion and exclusion criteria were defined according to the research question, which specified the population, intervention, comparison, and outcome (PICO). Inclusion criteria consisted of English-language records published in peer-reviewed journals. Studies needed to include a sample population of a human model, with living individuals presenting bruising from blunt-force trauma (including bite marks). The source of the trauma was not defined. Studies needed to include an intervention consisting of an ALS (UV, narrowband visible light, or IR) with a CWL comparison. Records also needed to include a discussion of outcomes, including a statement summarizing the preferred method for visualizing or detecting bruising. Investigations using ALS to identify biological samples outside the body, such as sperm, fingerprints, or gunshot residue, were excluded.
Selection of evidence
Data were imported into Microsoft Excel (Office 2019) for further selection and cataloging. Following removal of duplicates, records were screened for relevance in a systematic and sequential manner, by title, abstract, and full text. The relevance of each study was assessed by two independent researchers. Disagreements were resolved during consensus discussions. Only articles detailing an original study were selected for full-text screening; editorials/commentaries, conference proceedings, case reports, and technical protocols were excluded.
Study evaluation
Studies were evaluated using SPICOT (Study design, study Population, Intervention/exposure, Controls/comparisons/index test, Outcome, and Timespan) to systematically assess both the scientific evidence and the risk of bias in the forensic literature (Supplementary ). Screening using SPICOT was conducted to ensure that only studies fulfilling established scientific criteria were selected to form conclusions in this review. For the risk of bias assessment in SPICOT, a predetermined set of criteria within a study’s population, control/comparison, exposure, and assessment were analyzed. Within the population criterion, we examined first whether the population had been defined, second whether bruising was controlled for or validated, and third the investigated sample size. Similarly, for controls/comparison, we examined whether a negative bruise assessment had been performed and whether a CWL control had been conducted, alongside identifying sample size.
For intervention, the ALS exposure had to be defined, and for the assessment criterion, we examined not only whether procedures had been defined, but also whether multiple independent assessors were employed and whether blinded assessments had occurred. All studies were assessed in each category described to determine a combined level of evidence and risk of bias (categorized as low (0–9 points), medium (10–16 points), or high (17–20 points)). This scoring process was carried out by a sole researcher. Studies scoring SPICOT-low and SPICOT-medium were additionally assessed by a separate independent researcher. If variations in scores affected the SPICOT classification, consensus discussions were held to decide the final score. Studies that both researchers identified as SPICOT-low were excluded.
Data extraction
A summary of the information extracted from the studies is described in Table . In brief, this included publication type and date of publication. The data source was also extracted, in addition to an identification of the study design by the researcher. Information regarding the study population was extracted, including age and skin color, as well as bruise infliction method and location on the body. Population size ( n ) was also extracted. The ALS wavelength was noted alongside the specific bandpass/longpass filter used for detection. Assessment timepoint(s) and metrics were extracted, as well as the methods used for data processing/analysis, alongside information relating to the relevance of controls and control group size ( n ). Descriptions of the effectiveness in detecting and visualizing bruising using both ALS and CWL were recorded.
Ethical consideration
This study involves the analysis of existing published data and therefore did not require ethical approval.
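As a small illustration of the scoring bands used in the study evaluation above (low 0–9, medium 10–16, high 17–20 points), the mapping can be expressed as a simple function; the example score is arbitrary and hypothetical.

```python
# Minimal sketch of the SPICOT score banding described above
# (low: 0-9, medium: 10-16, high: 17-20 points).

def spicot_band(score: int) -> str:
    if not 0 <= score <= 20:
        raise ValueError("SPICOT scores range from 0 to 20")
    if score <= 9:
        return "low"
    if score <= 16:
        return "medium"
    return "high"

print(spicot_band(14))  # "medium" (arbitrary example score)
```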
Study selection
The search strategy yielded a total of 4055 studies, comprising 1883 from PubMed, 1840 from Medline and 332 from CINAHL. After removal of duplicates (2061) and systematic screening of titles and abstracts, 32 full-text articles were assessed for eligibility and 15 were further considered for SPICOT evaluation.
Five studies were assessed as SPICOT-low and therefore excluded, resulting in a total of ten studies being selected for this review . The selection process is detailed in Fig. according to PRISMA guidelines .
Risk of bias assessment
The risk of bias assessment is represented in Table . The selected studies all had defined populations, with the majority exhibiting samples of > 20 individuals. Only one study did not use an inflicted bruising control or consider a validation method to confirm bruising. While all studies conducted a CWL control, 40% did not consider a negative bruise examination/validation. In terms of assessment strategies, 60% of studies conducted blinded analysis of bruising with multiple assessors. All studies defined their ALS exposure.
Characteristics of individual sources of evidence
Characteristics of the individual studies are summarized in Table . Analysis demonstrates that 10% of the studies exhibited a correlation study design, 40% had a causal-effect design, and the remaining 50% had an experimental setup. The eight studies employing controlled inflicted bruising used either a metal object dropped onto the forearm of an individual or paintballs fired at the upper arm. In both cases, the velocity and impact zone were controlled. The remaining studies examined bruises within clinical settings, where timing (assessment post-trauma), bruise site (area on body), and impact details (velocity) could not be controlled for. Regarding the ALS narrowbands used, the majority investigated single wavelengths within the UV and VLS, with one study exploring IR and UV wavelengths in comparison to other imaging modalities in CWL, and another examining only IR in comparison to CWL imaging techniques. It is worth noting that only one study analyzed fluorescence while the remainder examined absorption under ALS. Diagnostic measurement was considered as follows: sensitivity – examination was conducted only on injuries in known locations; specificity – examination was conducted on both bruising and non-bruising sites. Based on this criterion, only one study considered specificity in its diagnostic measurement. Four studies reported bruise assessments using descriptors for visibility (e.g., clear, no, bare), two measured bruise size, one assessed anatomical location, and another the contrast between bruised and non-bruised skin. The remaining studies utilized a novel bruise visibility scale (BVS) and absorption visibility scale (AVS). Two studies examined bruising at a single time point, whereas the remaining eight spanned a period from 30 min post-bruise infliction to four weeks post-bruising. Two studies did not report or consider their sample population's skin color, with half of the remaining eight exhibiting representation across six skin categories: “very light,” “light,” “intermediate,” “tan,” “brown,” and “dark”. The remaining 50% had predominantly “white”/“light” sample populations.
Results of individual sources of evidence
Table summarizes findings presented in the selected studies. Collectively, the data indicate that among the ten selected studies, eight suggest that ALS is more effective than CWL in detecting and visualizing bruising, particularly noting its usefulness during the early stages of bruise formation. Analysis reveals that wavelength-filter combinations within the IR or UV spectral ranges do not outperform CWL, while narrowband wavelengths within the VLS, specifically 415 nm combined with either longpass or bandpass yellow-cut filters, do.
Various studies have explored the effectiveness of different single-wavelength and filter combinations in detecting and enhancing bruise visibility compared to CWL. Limmen et al. demonstrated that narrowband wavelengths between 400 and 470 nm significantly increased visibility compared to CWL, reporting improved visibility in 52% of bruises that were initially deemed “barely visible” under CWL. These findings are consistent with the known absorption peaks of oxyhemoglobin (415 nm), deoxyhemoglobin (430 nm), and bilirubin (460 nm) . Despite the declining frequency of visible observations with increasing skin pigmentation , wavelengths of 415 nm and 450 nm (paired with a yellow filter) exhibited the highest rates of bruise detection across all skin categories (415 nm: 11.2%; 450 nm: 11.1%), with the 415 nm/yellow filter being the only combination that outperformed CWL in cases where skin color was classed as “brown” or “dark” . Although the ability to detect bruises decreases over time, results from the selected literature imply that bruising may be detected and visualized sooner following trauma with an ALS than with CWL . Scafide et al. identified bruising in 98% of cases within the initial three days post-trauma when employing the 415 nm/yellow filter combination, whereas only 24% were detectable under CWL. Although the use of IR was proposed to be marginally superior to CWL during bruise formation in Black et al. , no statistically significant difference was observed between the methods. Findings are similar to those reported by Trefan et al. , though IR imaging was noted to produce smaller bruise sizes compared to CWL imaging. The time frame in which ALS is more effective than CWL appears to be constrained at both ends, as studies suggest that CWL is better within the initial 30 min post-trauma and, at the earliest, after two days post-trauma . Further investigations are needed, however, as the reported findings are contrasting. For instance, Nijs et al. found no significant difference in bruise visibility between ALS and CWL seven days post-trauma using 415 nm/yellow filter combinations, while Scafide et al. noted that the 450 nm/yellow filter consistently outperformed CWL in detecting bruises within a four-week period post-injury. Differences in analysis may account for this discrepancy, as Nijs et al. examined bruise visibility and Scafide et al. bruise detection. Nevertheless, the proposed time frame may explain why ALS performed better than CWL in the study by Limmen et al. , where the average time between injury and ALS examination was 2.6 days. The quantification of the visual degree of bruising, introduced by Nijs et al. and expressed on a scale from one (very bad) to ten (excellent), circumvents subjective visibility descriptors such as “obvious,” “clear,” “distinct,” “faded,” and “faint”. Scafide et al. further developed this quantitative BVS, suggesting that visibility should not be measured using the same scale for both CWL and ALS, since CWL includes the entire VLS and ALS only a narrow bandwidth. This may explain why bruises of low contrast, i.e. difficult to distinguish from surrounding skin, are more diffuse and less distinctive using IR and UV light . Scafide et al. therefore proposed a tailored BVS, referred to as the AVS, for use with ALS. When the scales were compared, a greater bruise size was associated with higher visibility on either scale, whereas greater contrast in color or lightness was associated with higher BVS values only .
Future studies should therefore consider the use of the AVS to provide more unity between investigations and comparable results.
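Since several of the reviewed studies quantify how distinguishable a bruise is from the surrounding skin, the short Python sketch below shows one simple way such a lightness contrast could be expressed. It is purely illustrative: the reviewed studies do not necessarily compute this particular metric, and the values used are hypothetical.

```python
# Illustrative only: a simple way to express bruise/skin contrast from
# image lightness values. None of the reviewed studies necessarily used
# this exact metric; it merely makes the idea of "contrast between
# bruised and non-bruised skin" concrete.

def weber_contrast(bruise_lightness: float, skin_lightness: float) -> float:
    """Relative difference in lightness between a bruise and adjacent skin."""
    if skin_lightness <= 0:
        raise ValueError("background lightness must be positive")
    return abs(bruise_lightness - skin_lightness) / skin_lightness

# Hypothetical mean grey levels (0-255) sampled from a photograph taken
# under a given light source and filter.
print(weber_contrast(bruise_lightness=96.0, skin_lightness=148.0))  # ~0.35
```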
Unlike traditional forensic medicine, which often relies on singular observations during autopsies, research within clinical forensic medicine benefits from being able to employ experimental study designs akin to those used in clinical trials. For instance, the majority of research investigating the effectiveness of ALS compared to CWL involves randomized study populations, controlled bruise inflictions, and examination strategies using multiple contact points with blinded assessments. From an initial search encompassing 4055 records, ten articles were identified as meeting the specified inclusion and exclusion criteria after screening. Data extracted from the selected studies indicate that employing a 415 nm ALS combined with a yellow bandpass/longpass filter outperforms CWL in both bruise detection and visualization. While research in this area is restricted to a single study, findings demonstrate that the 415 nm/yellow filter combination also performs better than CWL and other narrowband wavelengths when assessing bruises in individuals with darker skin tones. However, this is provided the location of a trauma is known. Only a single study compared the ability of ALS to discern bruising from non-bruising, with results indicating that caution is warranted if examining fluorescence . Previous studies have raised concerns regarding the specificity of ALS in detecting bruising . The chart review by Holbrook and Jackson showcased an impressive capability of ALS to detect bruises, identifying bruising in 98% of reported cases of strangulation, wherein 93% displayed no apparent injuries under CWL examination. This highlighted ALS as a compelling tool for bruise detection, with the findings presented in legal proceedings . However, the absence of controls specifically addressing bruise validity limits the results , as ascertaining what the authors identified as bruising is difficult, since neither hemoglobin nor bilirubin exhibits significant fluorescent properties, and skin may fluoresce for reasons other than bruising . Further investigations by Lombardi et al. revealed that CWL had a significantly greater specificity compared to fluorescence under ALS. The authors concluded that the diagnostic reliability of fluorescence under ALS remains uncertain if bruising cannot be validated, and that further investigation examining the specificity of absorption is necessary. Debatably, Lombardi et al.'s presentation of results, pooling wavelengths into a single sensitivity and specificity measure, may be deemed inaccurate, as data from individual wavelengths do exhibit higher sensitivity and specificity than CWL at various time points during the experiment. Nevertheless, to alleviate problems associated with the lack of specificity in routine casework, ALS examinations should always be conducted in conjunction with CWL. This approach facilitates the evaluation of additional factors, including pain, swelling, and the patient’s history of physical trauma, to validate bruising.
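To make the sensitivity/specificity distinction underlying this discussion concrete, the brief Python sketch below computes both measures from a hypothetical set of examination outcomes; the counts are invented for illustration and are not taken from any of the reviewed studies.

```python
# Illustration of why examining non-bruised sites is needed to estimate
# specificity: without true negatives and false positives, only
# sensitivity can be computed. All counts here are hypothetical.

def sensitivity(tp: int, fn: int) -> float:
    """Proportion of known bruise sites correctly identified as bruised."""
    return tp / (tp + fn)

def specificity(tn: int, fp: int) -> float:
    """Proportion of non-bruised sites correctly identified as bruise-free."""
    return tn / (tn + fp)

# Hypothetical ALS examination results:
# 49 of 50 known bruise sites flagged, 40 of 50 non-bruised sites cleared.
print(f"sensitivity = {sensitivity(tp=49, fn=1):.2f}")   # 0.98
print(f"specificity = {specificity(tn=40, fp=10):.2f}")  # 0.80
```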
Moreover, common over-the-counter topical products have been shown to generate greater ALS absorption when applied to light or medium skin tones than to dark skin . One makeup product consistently absorbed wavelengths between 310 and 535 nm in 80.9% of observations, and sunscreen (SPF30) absorbed significant light in 7% of cases. However, the remaining twelve products tested absorbed light in less than 1% of observations . In a follow-up study evaluating the effectiveness of three different topical product removal methods (soap and water, isopropyl alcohol swab, makeup removal wipe), four out of 14 products continued to exhibit significant absorption after removal . No differences were noted between removal methods, highlighting that further research exploring the specificity of ALS and topical products post-inflicted trauma is warranted, alongside studies addressing how previous wounds/scar tissue, tattoos, moles (including Mongolian spots), and freckles affect specificity. Live ALS examination is therefore advocated so that suspected bruises can be washed, mitigating any unknown risk of interference . Relying solely on ALS and CWL photography for bruise examination may overlook such elements. Research on the ability of ALS to detect and visualize bruising across varying skin pigmentations is sparse. Although Lombardi et al. disclosed that subjects were recruited regardless of race, only a small fraction exhibited dark skin pigmentation. The majority of the selected studies examined white/light populations. Of the ten studies reviewed, only the study series by Scafide et al. has addressed equal representation across skin categories determined by spectrophotometry. Scafide et al. found that the wavelengths 415 nm and 450 nm, when paired with yellow-cut filters, were consistently better than other wavelengths at bruise detection across all tested skin categories. UV was less effective than CWL in identifying bruising across darker skin tones, except in individuals with very light skin, which may be due to melanin’s peak absorption wavelength around 335 nm . In contrast, hemoglobin’s absorption spectrum typically exhibits a sharp peak at around 415 nm (dependent on oxygenation level), which most probably accounts for why this wavelength was most effective . Although Scafide et al. initially advocated the use of yellow or orange filters, subsequent analysis using the developed AVS determined that yellow alone was more effective . These results contrast with findings by Sully et al. , who suggest that longer wavelengths combined with orange filters are superior in dark skin; however, their use of a goat model with topically applied melanin could have resulted in higher pigment concentrations than those of human skin and may account for the differences observed. Additional studies are needed for further confirmation. Furthermore, it should be noted that all ten studies examined bruising on the extremities. The location of injury has been shown to have a significant impact on the manifestation of bruising and, by extension, its detection and visibility. For example, the presence of loose subcutaneous tissue increases the risk of blood extravasation, leading to more pronounced bruising around specific regions such as the eye compared with the hand .
Subpopulations such as children and the elderly are more susceptible to bruising than young and physically fit individuals . Additionally, individuals with conditions such as hypertension, diabetes, and coagulation disorders are more prone to exhibit different bruising patterns. Certain steroids have been observed to affect the rate of bruising development , and common medications such as anticoagulants can influence both the formation and resolution of bruises, which can manifest immediately or take longer to develop . Hence, results from the selected studies are constrained by the possibility that the data may not extend to injuries sustained on the torso, face/neck, and genital regions. In practice, medical history may not always be considered prior to an ALS assessment, and further studies are warranted to address such injury mechanisms and locations. While ALS research has primarily focused on assessing the technology’s capacity to detect and visualize bruising for enhanced documentation of blunt-force trauma for legal purposes, an ethical dilemma emerges regarding the potential for overinterpretation of injury mechanisms. Although this discussion falls beyond the scope of this review, it warrants attention for future research to contemplate how enhanced visualization of bruises could inadvertently mislead legal professionals lacking medical and technical expertise. For instance, increased visualization could result in an overestimation of injury severity or mechanism of injury, leading to erroneous judgments and unjust outcomes in legal proceedings. Hence, forensic and legal experts must exercise caution and thoroughness when interpreting and communicating ALS bruising evidence, particularly if relying solely on photographs.

Limitations of study

This review faces several limitations stemming from predetermined constraints dictated by the nature of systematic reviews and the narrow research question. While studies examining both specificity and sensitivity were included, the strict criteria resulted in a restricted pool of eligible studies. Consequently, only ten studies were deemed suitable, with only a single study addressing specificity. This selection bias should be considered when interpreting the review’s outcomes: while ALS outperforms CWL in bruise detection and visualization, studies have only considered the technology where the bruise location is known. In cases where a bruise cannot be validated either by CWL or other methods, ALS should be used with caution, as studies do not sufficiently address specificity. It should also be mentioned that five out of the ten selected studies were authored by the same research team, four of which were derived from the same primary dataset. Such pseudoreplication of findings, albeit presented from varying perspectives, may be argued to pose a limitation to this review and the wider research domain.
In conclusion, results from this systematic review indicate that ALS is more effective than CWL in detecting and visualizing bruising. Analysis reveals that wavelength/filter combinations within the IR or UV spectral ranges do not outperform CWL, while wavelengths within the VLS, specifically 415 nm with either longpass or bandpass yellow filters, do, across differing categories of skin color. These results, however, only address the sensitivity of ALS and can only be considered valid when the location of a bruise is known. Although only a limited number of studies exist, most employ experimental designs that deliver high-quality data owing to their randomization and controlled bruise infliction processes. Further investigations of comparable rigor are imperative, ideally conducted by a greater diversity of research teams. These studies should delve into questions concerning specificity, encompassing the impacts of topical products, a range of injury mechanisms, and repercussions on different anatomical regions. Moreover, the ethical quandary surrounding potential pitfalls stemming from the overinterpretation of visually enhanced data will demand careful consideration in the future, particularly as digital imaging methods become more autonomous. Below is the link to the electronic supplementary material. Supplementary Material 1
A case series on the basic concept and design of removable partial dentures: support and bracing considerations
In tooth-tissue-supported removable partial dentures, the attachment of the denture to the abutment teeth results in varying amounts of tissue displacement in the periodontal ligament and the residual ridge under the denture base. An even distribution of occlusal pressure across the denture is essential during function. Achieving a harmonious and simultaneous occlusal contact relationship between the denture and the remaining teeth, including the abutment tooth, requires an environment that can compensate for the differences in tissue displacement between the abutment tooth and the supporting elements of the denture . In addition, the denture must have the ability to resist occlusal pressure to suppress sinking (support), lateral force to prevent lateral movement (bracing), and vertical force to prevent surfacing (retention) . It is necessary to consider the denture design in terms of support, bracing, and retention . Specifically, this involves the use of rests, the denture base, minor connectors, proximal plates, the major connector, and extracoronal retainers (retentive clasps). As a result, when a functioning removable partial denture is completed, the functional occlusal pressure loading on the denture base is reduced by the support from the abutment tooth through the occlusal rest and from the tissue of the residual ridge . Most clinical cases of removable partial dentures involve extension-base removable partial dentures (tooth-tissue-supported removable partial dentures) . As such, the denture should be designed in a way that allows full utilization of the elements of the support and bracing action, with attention to the bracing effect resulting from the contact between the axial surface of the abutment tooth and the denture structure. The author believes that this approach can minimize denture movement, resulting in greater stability of the dentures. In designing a removable partial denture, support and bracing must be considered as fundamental principles. In several studies on the effects of guiding planes and proximal plates on clasp retention, it has been reported that the retentive force was maintained when both were well adapted . Clinicians must acknowledge that minimizing denture movement requires increasing the support and bracing action. By restricting the movement of the denture, sufficient retention can be expected without requiring excessive force in the undercut portion of the abutment tooth (frictional control). This study aimed to describe the effect of the support and bracing action on removable denture design, which is important for minimizing denture movement, as well as to explain how these principles can be applied in clinical practice. This study presents a case series of six extension-base removable partial dentures drawn from the author’s clinical practice. Therefore, the denture design described in this study is of great importance in pre- and postgraduate dental education, and the author believes that it will be helpful for dental students, interns, or residents. Herein, clinical cases in which it was possible to suppress denture movement by increasing the support and bracing actions of a denture design are described. All the included patients provided written informed consent for the use of personal or clinical details along with any identifying images for publication in this study. In all six cases, no medical, family, psycho-social history, or genetic information was provided.
Furthermore, no abnormalities were observed in the periodontal tissue based on the periodontal pocket probing, tooth mobility test, or bleeding on probing of the remaining mandibular teeth and residual mucous membrane. At each visit, intervention adherence and tolerability in terms of denture comfort and problems were verbally evaluated. Subsequently, denture occlusion and denture basal surface conformity tests were conducted each time to check for any abnormalities. Moreover, oral hygiene inspections and maintenance as well as masticatory function tests were carried out to confirm that the patients were able to eat without any problems. The patient was being followed up for maintenance, and no problems have been noted.

Case 1

This case is categorized as Kennedy Class I. A 51-year-old woman visited the university dental hospital with a chief complaint of masticatory disturbance due to inadequately fitting, unstable dentures. An inadequately fitting bridge was attached from the first premolar on the right side of the mandible to the second premolar on the left side as abutment teeth. Radiological examination (orthopantomogram) confirmed the inadequately fitting bridge; there were no abnormal findings in the edentulous ridge. Therefore, the patient was diagnosed with masticatory disorder due to inadequately fitting mandibular dentures and inadequately fitting bridge. Prosthetic treatment of the inadequately fitting bridge and mandibular denture was planned. A porcelain-fused-to-metal restoration, on which rest seats were applied, was used as the abutment tooth to make a bridge (Fig. a). After the successful placement of the mandibular bridge on the abutment teeth, a functional impression was taken, incorporating the border molding of the mandibular extension-base removable partial denture. Concurrently, the bridge was secured, and a precise working model was developed. The mandibular major connector was designed as a lingual plate, with rest proximal plate I bar (RPI) clasps as direct retainers on the right side of the mandibular first premolar and left side of the mandibular second premolar as abutment teeth (Fig. a, c). A rest was also applied to the left first premolar as an indirect retainer (Fig. a, c). For the artificial teeth, composite resin teeth were chosen (Endura posterior; Shofu Japan). The completed final denture is shown after it was inserted and fitted in the mouth (Fig. b, d, e, f). To evaluate the movement of the removable partial dentures, occlusal rests that provide support were installed in four places. On the lingual surface of the removable partial denture, the minor connector, proximal plate, and major connector (lingual plate) are essential for maintaining friction (frictional control) with the axial surface of the abutment tooth. These friction-maintaining effects have made it possible to minimize movement during denture function. The design controls the direction of denture placement/removal by maintaining frictional control. This case was followed up for 10 years after the placement of the dentures.

Case 2

This case is categorized as Kennedy Class I. A 67-year-old man visited the university dental hospital with a chief complaint of missing dentures (Fig. a-d). The patient was diagnosed with masticatory disorder in the mandible. Radiological examination (orthopantomogram) confirmed the remaining teeth; there were no abnormal findings in the edentulous ridge.
As the treatment was limited by the patient’s wishes, a mandibular extension-base removable partial denture was designed using acrylic resin. The mandibular right lateral incisor, left canine, and first and second premolars were used as abutment teeth. After applying the guiding plane and rest seat to the abutment teeth, a functional impression with border molding was taken. The completed final denture had a lingual plate major connector made of resin, wrought wire clasps on the right lateral incisor, double Akers clasps on the left first and second premolars as direct retainers, and embrasure hooks on the left canine as indirect retainers. To correct the occlusal plane of the mandibular left second premolar, a cap clasp made of indirect composite resin was combined with a double Akers clasp; this design controls the direction of denture placement/removal by maintaining frictional control. For the artificial teeth, composite resin teeth were chosen (Endura posterior; Shofu Japan). Furthermore, to prevent bending and breakage of the resin base denture, a reinforcement metal plate (diameter: 3 mm, thickness: 1.5 mm) was inserted to improve the strength. To evaluate the movement of the removable partial dentures, occlusal rests that provide support were installed in four places. On the lingual surface of the removable partial denture, the minor connector, proximal plate, and major connector (a resin lingual plate) are essential for maintaining friction (frictional control) with the axial surface of the abutment tooth. These friction-maintaining effects have made it possible to minimize movement during denture function. The design controls the direction of denture placement/removal by maintaining frictional control. This case was followed up for 8 years after the placement of the dentures.

Case 3

This case is categorized as Kennedy Class II-1. A 54-year-old man visited the university dental hospital with a chief complaint of masticatory disturbance due to inadequately fitting, unstable dentures. Inadequately fitting crowns were observed in the mandibular right second molar, first premolar, canine, lateral and central incisors, and left central incisor, which were confirmed by radiological examination (orthopantomogram). Nevertheless, there were no abnormal findings in the edentulous ridge. Therefore, the patient was diagnosed with masticatory disorder due to inadequately fitting mandibular dentures and inadequately fitting crowns. A new mandibular denture and connecting resin-veneered restoration was planned with milling on the axial lingual surface. A crown and fused crown with milling were fabricated, to which the guiding plane was applied as the abutment tooth. After the mandibular and fused crowns were applied to the abutment teeth, a functional impression with border molding of the extension-base removable partial denture was taken.
To evaluate the movement of the removable partial dentures, occlusal rests that provide support were installed in three places. On the lingual surface of the removable partial denture, the proximal plate and the design of a Kennedy bar made of metal with a milling technique applied to the major connector are essential for maintaining friction (frictional control) with the axial surface of the abutment tooth. These friction-maintaining effects have made it possible to minimize movement during denture function. The design controls the direction of denture placement/removal by maintaining frictional control. This case was followed up for 10 years after the placement of the dentures. Case 4 This case is categorized as Kennedy Class I. A 72-year-old woman visited the university dental hospital with a chief complaint of masticatory disturbance due to inadequately fitting, unstable maxillary dentures. There were no abnormal findings in the crown fitted from the right maxillary to the left canine. Radiological examination (intraoral radiographic image, orthopantomogram) confirmed the remaining teeth; there were also no abnormal findings in the edentulous ridge. Therefore, the patient was diagnosed with masticatory disorder due to inadequately fitting dentures in the maxilla. Prosthetic treatment of the inadequately fitting maxillary denture was planned. After adding a guiding plane and cingulum rest seat for the canines on both sides, a functional impression with border molding of the extension-base removable partial denture was taken. The maxillary major connector was designed as a palatal plate, with RPI clasps having cingulum rests on the canines as direct retainers on both sides. These clasps were installed to provide frictional retention on the lingual and distal axial surfaces of the abutment teeth. The design controls the direction of denture placement/removal by maintaining frictional control. For the artificial teeth, composite resin teeth were chosen (Endura posterior; Shofu Japan). The completed final denture inserted and fitted in the mouth is presented in Fig. a-d. To evaluate the movement of the removable partial dentures, the cingulum rests that provide support were installed in two places. On the palatal surface of the removable partial denture, the proximal plate and the major connector (palatal plate) are essential for maintaining friction (frictional control) with the axial surface of the abutment tooth. These friction-maintaining effects have made it possible to minimize movement during denture function. The design controls the direction of denture placement/removal by maintaining frictional control. This case was followed up for 10 years after the placement of the dentures. Case 5 This case is categorized as Kennedy Class II. An 82-year-old woman visited the university dental hospital with a chief complaint of masticatory disturbance due to inadequately fitting, unstable maxillary dentures after tooth extraction in the right maxillary molar region. The patient was also concerned about the appearance of an incompatible left central and lateral incisor crown. Radiological examination (intraoral radiographic image, orthopantomogram) confirmed the remaining teeth; there were no abnormal findings in the edentulous ridge. Therefore, the patient was diagnosed with masticatory disorder due to missing maxillary right posterior tooth and aesthetic disorder due to an inadequately fitting maxillary crown. 
Prosthetic treatment of the inadequately fitting maxillary crown and maxillary dentures was planned. After two ceramic crowns for the maxillary anterior teeth were fabricated, the cingulum rest seat for the maxillary left canine, the occlusal rest seat for the distal second premolar, and the occlusal rest seats for the distal first molar and the mesial second molar were installed. After the completed maxillary crowns were applied to the abutment teeth, a functional impression with border molding of the maxillary extension-base removable partial denture was taken. Concurrently, working models of the crowns were developed. The maxillary major connector was designed as a palatal plate and was extended continuously to come into contact with the axial surfaces of the residual teeth. Cingulum rests for the left central and lateral incisors as direct retainers, an Akers clasp with a cingulum rest for the left canine as an indirect retainer, an Akers clasp with an occlusal rest for the distal second premolar as an indirect retainer, and double Akers clasps with occlusal rests for the distal first molar and mesial second molar as indirect retainers were installed. The design controls the direction of denture placement/removal by maintaining frictional control. For the artificial teeth, composite resin teeth were chosen (Endura anterior, posterior; Shofu Japan). The completed final denture inserted and fitted in the mouth is presented in Fig. a-d. To evaluate the movement of the removable partial dentures, the cingulum and occlusal rests that provide support were installed in four places. On the palatal surface of the removable partial denture, the minor connector, proximal plate, and major connector (metal palatal plate continuously extended to come into contact with the axial surface of the abutment tooth) are essential for maintaining friction (frictional control) with the axial surface of the abutment tooth. These friction-maintaining effects have made it possible to minimize movement during denture function. The design controls the direction of denture placement/removal by maintaining frictional control. This case was followed up for 6 years after the placement of the dentures.

Case 6

This case is categorized as Kennedy Class I-1. A 55-year-old man visited the university dental hospital with a chief complaint of masticatory disturbance due to inadequately fitting, unstable dentures. An inadequately fitting crown was attached to the maxillary right first molar, and an inadequately fitting bridge with the maxillary left canine, first premolar, and first molar as abutment teeth was observed. Furthermore, the bilateral maxillary tubercles were markedly bulging, and it was difficult to extend the denture base. Radiological examination (intraoral radiographic image, orthopantomogram) confirmed an inadequately fitting bridge; there were no abnormal findings in the bone quality of the edentulous alveolar ridge. Therefore, the patient was diagnosed with masticatory disorder due to inadequately fitting maxillary dentures and an inadequately fitting maxillary crown and bridge. A new maxillary denture, a right maxillary complete metal crown with milling on the axial lingual surface, and a left maxillary porcelain-fused-to-metal restoration with milling on the axial lingual surface were planned. A crown and a bridge, milled with a guiding plane applied to the abutment tooth, were fabricated. After the placement of the maxillary crown and bridge on the abutment teeth, a functional impression with border molding was taken.
Concurrently, the crown and bridge were secured, and a precise working model was developed. The maxillary major connector was designed as a palatal plate, with Akers clasps for the right molars as direct retainers, a cingulum rest for the left canine as a direct retainer, and an Akers clasp for the left first premolar as an indirect retainer and that for the first molar as a direct retainer (Fig. a, c). The design controls the direction of denture placement/removal by maintaining frictional control. For the artificial teeth, composite resin teeth were chosen (Endura anterior, posterior; Shofu Japan). The completed final denture inserted and fitted in the mouth is presented in Fig. a-d. To evaluate the movement of the removable partial dentures, the cingulum and occlusal rests that provide support were installed in five places. On the palatal surface of the removable partial denture, the minor connector, proximal plate, and major connector (metal palatal plate with a milling technique applied to the major connector) are essential for maintaining friction (frictional control) with the abutment tooth milled on the lingual and mesiodistal axial surface. These friction-maintaining effects have made it possible to minimize movement during denture function. The design controls the direction of denture placement/removal by maintaining frictional control. This case was followed up for 10 years after the placement of the dentures.
Support action

To alleviate the impact of occlusal loading on the abutment teeth due to the pressure exerted on the denture base during function, it is important to minimize the soft-tissue displacement beneath the denture base during occlusion and to reduce the difference between the displacement under the denture base and that of the abutment teeth . Through this, simultaneous occlusal contact between the denture base and the abutment teeth can be achieved during function (Fig. ). As a result, it is possible to disperse the functional occlusal force so that it is transmitted during function in the axial direction of the abutment tooth and vertically to the alveolar ridge mucosa (Figs. , , , , and ).
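As a simplified illustration of this load-dispersion principle (a schematic assumption, not an analysis taken from the case series, and ignoring the viscoelastic response of the periodontal ligament and mucosa), an occlusal force F applied at an angle θ to the long axis of the abutment tooth can be resolved into an axial component, carried by the rests and the mucosa-supported base, and a transverse component, which the bracing elements must resist:

```latex
% Illustrative decomposition only; \theta is the angle between the applied
% occlusal load and the long axis of the abutment tooth.
F_{\text{axial}} = F\cos\theta, \qquad F_{\text{transverse}} = F\sin\theta
```

Designs that keep θ small, that is, that direct loading along the tooth axis and perpendicular to the ridge, therefore minimize the transverse component that would otherwise displace the denture laterally.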
Maintaining friction on parallel surfaces (frictional control)
To minimize denture movement, it is essential to utilize the bracing action created by contact between the axial surfaces of the abutment teeth (guiding planes) and the minor connector, proximal plate, and major connector on the lingual and palatal axial surfaces of the abutment teeth. In addition, it is necessary to use bracing clasps designed for effective bracing action. Figure (a, c) presents a clinical case (surveyed crown) in which part of the crown contour was processed using a milling device to ensure parallel alignment of the lingual and mesiodistal axial surfaces of each abutment tooth of the maxillary dentition. Multiple rests and guiding planes were set parallel to the direction of denture placement/removal to regulate the direction of denture movement. This improves the bracing action through contact with the axial surfaces of the abutment teeth and multiple rests, ensuring denture stabilization during function. Among the denture components, including those in the clinical cases of Figs. , , , , and , the minor connector and proximal plate provide frictional control by maintaining contact with the guiding plane on the axial surface of the abutment teeth, which in turn controls the direction of movement and minimizes movement during denture function. Furthermore, by restricting the direction of placement/removal (path of placement [or insertion] and removal), the denture becomes less likely to be lifted and separated (Fig. a, b, c). Figure (a, b, c, d) shows an example of a design that controls the direction of denture placement/removal by maintaining friction on parallel surfaces (frictional control). The abutment tooth side is a surveyed crown designed with a guide plane and mesial rest (Fig. a) to maintain friction on surfaces parallel to the denture components (minor connector, proximal plate, and major connector) (Fig. c). The denture design incorporates the concept of structural design (Fig. c), with the major connector designed as a lingual plate because the distance from the gingival margins to the floor of the mouth is less than 7 mm (Fig. a, c). The continuous contact of the major connector with the tooth surface regulates the direction of denture placement/removal, as can be seen in the lingual view (Fig. d, f).
Reciprocation
Figure presents three cases of reciprocation in the retentive and bracing clasps (reciprocal clasps). The bracing clasp opposing the retentive clasp on the buccal side can be designed with a plate-like shape of vertical width, as shown in Fig. a, to improve the bracing action and suppress lateral movement of the denture. Similarly, a plate-shaped bracing clasp with vertical width can be designed in combination with the denture base (metal based or acrylic resin based) to suppress lateral movement, as shown in Fig. b. From the viewpoint of vertical and lateral reciprocation during denture placement/removal, a plate-like bracing clasp with vertical width is desirable. Furthermore, the bracing clasp can be designed as a lingual plate, the major connector illustrated in Fig. c, that opposes the buccal retentive clasp and extends on the lingual axial surface toward the occlusal side.
Therefore, as in the clinical cases shown in Figs. , , , , and , it is necessary for the retentive and bracing clasp arms to come into contact with the axial surface simultaneously to protect the abutment tooth and prevent buccolingual movement. This phenomenon, known as reciprocation, is an important design requirement for clasp elements and plays a pivotal role during denture placement/removal.
Major connectors
The major connectors effectively and appropriately distribute the functional load applied to the denture by extensively covering the residual lingual (palatal) surface. The lingual and palatal plates have continuous, wide contact with the axial surfaces of the abutment teeth, which resists the lateral forces applied to the denture and thereby provides effective bracing action. In daily clinical practice, cases with multiple missing teeth in the anterior and molar regions of the maxillary and mandibular arches are frequently encountered (Figs. , , , , and ). In these cases, minimizing denture movement using direct or indirect abutments alone is difficult. Therefore, it is desirable to provide support and bracing action through the major connector. In the mandibular region, as illustrated in Figs. and , a plate-shaped major connector is designed to extend continuously into contact with the axial surfaces of the remaining teeth. Essentially, the methodology for designing the major connector in the maxillary region is the same as that in the mandibular region. As shown in Figs. and , to prevent lateral movement of the denture and lifting or separation at the posterior end of the denture base, a plate-shaped major connector is used to extend continuously and contact the axial surfaces of the residual teeth. These are important considerations when designing the form of the major connectors. In addition, as shown in Figs. and , milling on the axial surface of the abutment tooth may enhance the bracing effectiveness of the major connector design in some cases.
This study presents the principles of removable partial denture design using clinical cases from the author's practice (Figs. , , , , and ), highlighting the crucial role of support and bracing action in minimizing denture movement. The basic procedure for designing dentures is therefore as follows: rest/denture base (support action), followed by the minor connector/proximal plate/major connector (bracing action), and finally the extracoronal retainer (retentive clasp, retentive action). As demonstrated in the clinical examples presented in this study (Figs. , , , , and ), removable partial dentures with clasps applied to the retainer belong to the category of rigid connections. By enhancing the effect of support and bracing action, it is possible to improve the connection strength and achieve a condition close to a rigid connection, thereby minimizing denture movement. This study highlights the role of a plate-shaped major connector that comes into contact with the axial surface of the abutment tooth. This enhances the bracing action, thereby contributing to the minimization of denture movement. This approach incorporates the concept of structural design into removable partial denture prostheses. While the papers cited in this study discuss the design of removable partial dentures, few include actual clinical cases such as the ones presented here. The author therefore believes that this case series provides valuable insights and is clinically important, potentially enhancing prosthetic treatment with removable partial dentures. When designed to provide sufficient support and bracing action, such dentures exhibit adequate stability in the mouth. The establishment of multiple contact points between the guiding planes of the abutment teeth and the denture components (frictional control) enhances the bracing action and also clarifies the placement/removal direction (path of placement and removal) of the denture (Fig. ). In this study, the author proposes a denture design that emphasizes bracing action through contact with the axial surface of the abutment tooth, particularly for the major connectors and denture bases among the denture components with bracing action (Figs. , , , , and ). Contact with the axial surface of the tooth, parallel to the placement/removal direction of the denture, not only suppresses lateral movement when the denture is seated but also governs the placement/removal direction (path of placement and removal) of the denture. From the perspective of minimizing denture movement, regulation of the direction of movement is essential in denture design. Preventive dentistry is another factor to consider when designing removable partial dentures. When designing the mandibular major connector, either the lingual bar or the lingual plate should be selected based on the distance from the gingival margin of the remaining teeth to the floor of the mouth. If there is no issue with the position of the gingival margin of the remaining teeth, the lingual bar, which exhibits excellent self-cleaning action, is considered the first choice. However, there are many clinical cases in which the superior support and bracing action of the lingual plate are more important than the cleanability and self-cleaning action of the lingual bar.
Thus, when applying a plate-shaped major connector, it is necessary to ensure that the patient does not have extensive caries or advanced periodontal disease and is capable of reliably and effectively controlling dental and denture plaque during home-care maintenance. Furthermore, if it is impossible to secure enough retainers to minimize denture movement, or if the residual ridges provide poor support, it may be preferable to choose a lingual plate to compensate for these factors (Figs. , and ). In recent years, clinical research using crossover studies of lingual bars and plates has been conducted. These studies indicate that lingual plates do not directly facilitate the growth of bacteria that cause periodontal disease when oral hygiene management is adequately performed. In addition, a 30-year retrospective cohort study of fitted removable partial denture designs at the University of Montreal School of Dentistry found that adequate oral hygiene management and instruction, as well as a planned maintenance system, ensured appropriate application of the lingual plate and maintenance of good oral health. Based on this evidence, the application of plate-shaped major connectors is an effective treatment strategy for removable partial dentures when strengthening of the support and bracing action is the top priority. In the clinical cases presented herein (Figs. , , , , and ), the effect of suppressing denture movement was demonstrated with all denture designs. In each case, the author explained the importance of the remaining teeth to the patient and provided instructions on oral hygiene before starting denture treatment. In addition, the author explained the importance and handling of the dentures to the patient, the patient's understanding was confirmed during treatment and after delivery of the dentures, and consent was obtained before treatment. After treatment, the patients' motivation for oral hygiene and understanding of their dentures improved compared with before treatment, positively impacting the long-term prognosis after denture placement and improving quality of life. There have also been recent reports that masticatory function can be improved by prosthodontic rehabilitation with a removable prosthesis in a patient with hypohidrotic ectodermal dysplasia, and that the mixing ability test was capable of detecting improvement in masticatory function with new removable partial dentures (Kennedy Classes I and II). In the clinical cases in this study (Figs. , , , , and ; Kennedy Classes I and II), no problems with masticatory function were recorded in a masticatory ability test using gummy jelly or in interviews with the patients. During the follow-up period, the author also checked the denture components, the condition of the denture basal surface, occlusion, maintenance of motivation for oral hygiene, and the maintenance and stability of masticatory function at regular checkups. This study has several limitations that need to be acknowledged. In recent years, with the rise of the superaging society, the number of individuals with remaining teeth has been increasing annually at a considerable rate. Removable partial denture treatment for missing teeth is an essential prosthetic dental treatment in everyday clinical practice, and demand for it is expected to increase further.
Despite the limitations of the available data for elucidating the association between the various factors at play in superaging societies and the importance of removable partial dentures, the findings of this study remain valuable. The long-term cases, in which the patients demonstrated remarkably stable functional and preventive dental outcomes, together with the design principles outlined here, highlight the value of this work for the prosthetic treatment of missing dentition with removable partial dentures. In this paper, each case has been discussed on the basis of the design principles described above. In today's superaging society, however, expectations for patient satisfaction with removable partial denture treatment are higher than ever, and there is thus a strong demand for individualized treatment in each case (number of missing teeth, distribution of missing teeth, properties of the remaining tissues, etc.). With the continuously increasing diversity of removable partial denture cases, the author believes that their importance will remain undiminished. To meet this demand, the author considers it important to return to the basics of clinical cases and to reconsider the fundamentals and design principles of removable partial dentures; the cases considered here therefore involve extension-base removable partial dentures. In addition, the denture design described in this study is significant for pre- and postgraduate dental education, and the author believes it will be helpful for dental students, interns, and residents in clinical practice. When designing removable partial dentures, maximizing support and bracing action is required to minimize denture movement. This study highlights the importance of support and bracing effects that utilize the contact between the axial surface of the abutment tooth and the denture structure. These can be summarized as follows: 1) improving the bracing action by bringing the minor connector and proximal plate into contact with guiding planes set on multiple teeth (frictional control) is crucial, and it is important to control the direction of the denture during placement/removal (path of insertion); 2) it is essential to consider the design of the major connector to improve the support and bracing action. Therefore, in prosthetic rehabilitation involving removable partial dentures, the combined action of support and bracing is required not only from the retainer but also from the other denture components, including the denture base and major connector. Moreover, it is important to consider the equitable distribution of force during function. Strategically designed removable partial dentures with frictional control and a defined path of insertion help minimize denture movement and enhance stability.
Follicular dendritic cell sarcoma involving the parotid gland with expression of the melanocytic marker PRAME
05f3b488-34c4-4fb5-99ac-e8643c759aeb
11634961
Anatomy[mh]
Soft tissue sarcomas are rare tumors comprising roughly 1% of malignancies in adults. Despite their rarity, they exhibit a substantial mortality rate, contributing to about 3–4% of cancer-related fatalities each year . FDC sarcoma is an exceedingly uncommon form of sarcoma characterized by its low to intermediate malignant nature. It originates from follicular dendritic cells, yet instances of its occurrence in extranodal locations such as the mediastinum, gastrointestinal tract, liver, and spleen have also been documented . Only four prior occurrences of intra-parotid FDC sarcomas have been documented . Our patient is a 65-year-old male who presented with a right parotid mass and bilateral neck lymphadenopathy. He had an undocumented history of a cutaneous right cheek lesion that was previously biopsied and thought to represent B-cell lymphoma. On examination, there was a firm mass at the right parotid tail. There was also a palpable right neck lymphadenopathy in level 2A and level 3. Magnetic resonance imaging (MRI) of the neck showed a 5.1 × 4.5 × 8.3 cm enhancing heterogeneous T2 hyperintense lesion involving the right superficial parotid gland. Initially, an ultrasound-guided core biopsy was performed, which showed a poorly differentiated neoplasm, suggestive of FDC sarcoma. The patient then underwent right total parotidectomy and bilateral neck dissection. Sections of the parotid mass showed an infiltration of large cells with irregular nuclei, vesicular chromatin, prominent nucleoli, and moderate cytoplasm. A subset of the cells showed atypia with enlarged, highly irregular, and hyperchromatic nuclei. The malignant cells expressed CD21, CD23 (subset), CD35 (small subset), CXCL13 (subset), vimentin, fascin, and clusterin, suggestive of FDC origin (Fig. ). The malignant cells also expressed CD4 and CD5 (subset) but were negative for all other T-cell markers (CD2, CD3, CD7, CD8, CD43, TIA-1, BF-1). Since a subset of FDC sarcomas can be associated with indolent T-lymphoblastic proliferations, TdT stain was performed and is negative. EBV was negative by in situ hybridization (EBER). Podoplanin (D2-40), which can be utilized as a marker for follicular dendritic cells was negative in our case. In addition, the malignant cells were positive for PRAME but negative for all other melanoma markers (S100, HMB45, Melan A, and SOX10). The infiltrate involved the parotid gland parenchyma and directly adjacent lymph nodes. Table illustrates the different antibody clones used in the case. A next-generation sequencing (NGS) test was performed (Tempus 648 genes xT panel) and it detected 8 likely pathogenic somatic variants, including TP53, RB1, and FBXW7 loss-of-function variants. B-cell gene rearrangement studies by polymerase chain reaction (PCR) were performed but showed inconclusive results. Table illustrates the different mutations detected along with their variant allele frequency (VAF). Taken together, the overall picture supports a diagnosis of follicular dendritic cell (FDC) sarcoma. A follow-up appointment was arranged with the Radiation Oncology department for further assessment and management. Follicular dendritic cells are a specialized type of dendritic cells that are largely restricted to lymphoid follicles. They form dense three-dimensional meshworks within benign follicles, which maintain the follicular architecture . FDC sarcoma is a neoplastic proliferation of cells showing morphologic and immunophenotypic features of follicular dendritic cells . 
The etiology of this neoplastic transformation is unknown, although it may evolve in situations in which there is FDC hyperplasia and overgrowth. It usually occurs de novo; however, it can sometimes occur in association with hyaline vascular Castleman disease, whether simultaneously or as a succeeding event. It presents as a painless solid mass, usually nodal (mainly cervical lymph nodes), but it can also involve extranodal sites, such as the tonsils, spleen, skin, and gastrointestinal tract. A new variant, EBV-positive inflammatory follicular dendritic cell tumor, has recently been described; it is reported to occur exclusively in the liver and spleen, to exhibit a more interspersed lymphoplasmacytic infiltrate, and to express EBV by in situ hybridization. Overall, FDC sarcoma is considered a low-grade sarcoma that recurs in nearly half of cases and can also metastasize. Surgical resection remains the best treatment for these tumors. Histologically, these tumors can be difficult to diagnose, as the morphological spectrum is broad and often causes confusion. Cytological atypia is present only in a subset of cases, and mitotic figures are common but highly variable in number. By immunohistochemistry, FDCs express CD21, CD23, CD35, CXCL13, and clusterin. They also usually express vimentin, fascin, HLA-DR, and EMA and are variably positive for CD68, S100, and CD45. Clusterin staining is reported to be highly sensitive (100%) and specific (93%) and, along with CD21 and CD23, constitutes the essential panel of stains required to establish a definitive diagnosis. PRAME stain exhibits diffuse positivity in most melanomas, while typically presenting as negative or showing limited and focal immunoreactivity in nevi. Variable degrees of PRAME staining have been sporadically observed in other malignant tumors, including most synovial sarcomas, myxoid liposarcomas, and malignant peripheral nerve sheath tumors (MPNST). Other neoplasms such as seminomas and carcinomas of various origins, including endometrial, serous ovarian, mammary ductal, lung, and renal, showed an intermediate proportion of cases and a variable extent of tumor cells positive for PRAME protein expression. To our knowledge, PRAME positivity has not been reported in FDC sarcoma before. In our case, PRAME is positive, but all other melanoma markers (S100, HMB45, Melan A, and SOX10) are negative. A few FDC sarcoma cases with aberrant phenotypes have been reported previously, including a case of intra-abdominal FDC sarcoma with pleomorphic features and aberrant expression of neuroendocrine markers, an unusual case of FDC sarcoma of the omentum with pleomorphic morphology and aberrant cytokeratin expression, another case with aberrant T-cell antigen expression, and a clinicopathologic study of 15 FDC cases with expression of MDM2, somatostatin receptor 2A, and PD-L1. Although genetic drivers of tumorigenesis in FDC sarcoma are largely unknown, recent genomic profiling studies have revealed several recurrent gene alterations, including the BRAF V600E mutation and loss-of-function variants in tumor suppressor genes involved in the regulation of the NF-κB pathway and the cell cycle, such as NFKBIA, CYLD, CDKN2A, and RB1. In addition, genomic profiling for one patient with primary esophageal follicular dendritic cell sarcoma revealed pathogenic variants in multiple genes, including CHEK2, FAT1, TP53, DPYD, ERBB2IP, FBXW7, KMT2D, PPP2R1A, and TSC2.
The NGS results for this patient identified loss-of-function pathogenic variants in RB1 (p.W516*), TP53 (p.G187D), and FBXW7 (p.S294fs), which have been reported previously in FDC sarcoma patients, supporting the FDC sarcoma diagnosis. In conclusion, we report a case of FDC sarcoma with an unusual extranodal localization in the parotid gland. Furthermore, the aberrant expression of the melanocytic marker PRAME observed here has not, to our knowledge, been reported before. All other melanocytic markers were negative in our case, while the characteristic FDC markers were positive.
Adaptive scales of integration and response latencies in a critically-balanced model of the primary visual cortex
534c770d-4ece-4652-b162-d0ff54d897e0
5922535
Physiology[mh]
Stimuli in the natural world have quantitative characteristics that vary over staggering ranges. Our nervous system evolved to parse such widely-ranging stimuli, and research into how the nervous system can cope with such ranges has led to considerable advances in our understanding of neural circuitry. For example at the sensory transduction level, the physical magnitudes encoded into primary sensors, such as light intensity, sound pressure level and olfactant concentration, vary over exponentially-large ranges, leading to the Weber-Fechner law . As neuronal firing rates cannot vary over such large ranges, the encoding process must compress physical stimuli into the far more limited ranges of neural activity that represent them. These observations have stimulated a large amount of research into the mechanisms underlying nonlinearly compression of physical stimuli in the nervous system. Of relevance to our later discussion is the nonlinear compression of sound intensity in the early auditory pathways [ – ], where it has been shown that poising the active cochlear elements on a Hopf bifurcation leads to cubic-root compression. But other characteristics besides the raw physical magnitude still vary hugely. The wide range of spatial extents and correlated linear structures present in visual scenery [ – ] leads to a more subtle problem, if we think of the visual areas as fundamentally limited by corresponding anatomical connectivity. Research into this problem has been focused on elucidating the nature of receptive fields of neurons in the primary visual cortex (V1) [ – ]. Studies have found that as the contrast of a stimulus is decreased, the receptive field size or area of spatial summation in visual space increases ( ) [ , , , ]. As an example of contextual modulation of neuronal responses, this problem has naturally received theoretical attention [ – ]. However, current literature does not describe this phenomenon as structurally integral to the neural architecture but rather either highlight a different set of features or the contextual modulations are explicitly written in an ad hoc fashion. Our aim is to develop a model which displays this phenomenon structurally, as a direct consequence of the neural architecture. In our proposed models, multiple length scales emerge naturally without any fine tuning of the system’s parameters. This leads to length-tuning curves similar to the ones measured in Kapadia et al . over the entire range ( ) . The findings of Kapadia et al . demonstrate that receptive fields in V1 are not constant but instead grow and shrink, seemingly beyond naive anatomical parameters, according to stimulus contrast. The “computation” being carried out is not fixed but is itself a function of the input. Let us examine this distinction carefully. There are numerous operations in image processing, such as Gaussian blurs or other convolutional kernels , whose spatial range is fixed. It is very natural to imagine neural circuitry having actual physical connections corresponding to the nonzero elements of a convolutional kernel, and in fact a fair amount of effort has been expended trying to identify actual synapses corresponding to such elements . 
There are, however, other image-processing operations, such as floodfill (the “paint bucket”) whose spatial extent is entirely dependent on the input; the problem of “binding” of perceptual elements is usually thought about in this way, and mechanisms posited to underlie such propagation dynamics include synchronization of oscillations acting in a vaguely paint-bucket-like way [ – ]. This dichotomy is artificial because these are only the two extremes of a potentially continuous range. While the responses of neurons in V1 superficially appear to be convolutional kernels, their strong dependence on input characteristics, particularly the size of the receptive field, demonstrates a more complex logic in which spatial extent is determined by specific characteristics of the input. What is the circuitry underlying this logic? Neurons in the primary visual cortex are laterally connected to other neurons on the cortical surface and derive input from them. Experiments have shown that the spatial extent on the cortical surface from which neurons derive input from other neurons through such lateral interactions varies with the contrast of the stimulus . In the absence of stimulus contrast, spike-triggered traveling waves of activity propagate over large areas of cortex. As contrast is increased, the waves become weaker in amplitude and travel over increasingly small distances. These experiments suggest that the change in spatial summation area with increasing stimulus contrast may be consistent with the change in the decay constants of the traveling wave activity. However, no extant experiment directly links changes in summation in visual space to changes in integration on the cortical surface, and no explicit model of neural architecture has been shown to simultaneously account for, and thus connect, the input-dependence of spatial summation and lateral integration in V1. The latter one is our aim, and a crucial clue will come from the input-dependence of latencies. Recently, a critically-balanced network model of cortex was proposed to explain the contrast dependence of functional connectivity . It was shown that in the absence of input, the model exhibits wave-like activity with an infinitely-long ranged susceptibility, while in the presence of input, perturbed network activity decays exponentially with an attenuation constant that increases with the strength of the input. These results are in direct agreement with Nauhaus et al . . We will now demonstrate that a similar model also leads to adaptive scales of spatial integration in visual space. Our model makes two key assumptions. The first is a local, not just global, balance of excitation and inhibition across the entire network; all eigenmodes of the network are associated with purely imaginary eigenvalues. It has been shown that such a critically-balanced configuration can be achieved by simulating a network of neurons with connections evolving under an anti-Hebbian rule . The second key assumption is that all interactions in the network are described by the connectivity matrix; nonlinearities do not couple distinct neurons in the network. In dynamical systems theory, the existence of purely imaginary eigenvalues implies the existence of an invariant subspace of the activity known as a center manifold . In contrast to hyperbolic fixed points , where the linearization of the system fully describes the topological structure of the local solution, dynamics on center manifolds are not dominated by the linearization. 
This leads to complex nonlinear behavior in which the nonlinear terms and input parameters play a crucial role in determining properties of the system such as relaxation timescales and correlation lengths . In the model presented in this paper, the center manifold is full dimensional, and thus the rich, complex behavior we will be discussing is not surprising. We postulate that, in general, neural systems utilize center manifolds in order to flexibly integrate sensory input. Our aim in this paper is not to provide a detailed neuroanatomical and physiological model of V1, but rather to construct a toy model which provides an existence proof that center manifold dynamics can account for and connect three input-dependent computational properties of V1. This approach of constructing a minimalist toy model to explain how a given mechanism can lead to a particular set of properties is common practice in theoretical physics and is the underlying philosophical approach of several well-known theoretical neuroscience models, e.g. the Wilson-Cowan equations, Hopfield networks, and Kuramoto models . There are a number of examples of dynamical criticality in neuroscience, including experimental studies in motor cortex , theoretical and experimental studies of line attractors in oculomotor control, line attractors in decision making , Hopf bifurcation in the auditory periphery [ – , , ], the olfactory system , and theoretical work on regulated criticality . More recently, Solovey et al. performed stability analysis of high-density electrocorticography recordings covering an entire cerebral hemisphere in monkeys during reversible loss of consciousness. Performing a moving vector autoregressive analysis of the activity, they observed that the eigenvalues crowd near the critical line. During loss of consciousness, the number of eigenmodes at the edge of instability decreases smoothly, and the modes drift back to the critical line during recovery of consciousness. Dynamical criticality is distinct from statistical criticality, which is related to the statistical mechanics of second-order phase transitions. It has been proposed that neural systems , and more generally biological systems , are statistically critical in the sense that they are poised near the critical point of a phase transition . Statistical criticality is characterized by power-law behavior such as avalanches [ – ] and long-range spatiotemporal correlations . While both dynamical criticality and statistical criticality have had success in neuroscience, their relation is still far from clear [ , , ]. We also examine the dynamics of the system and show that its activity decays exponentially to a limit cycle over multiple timescales, which depend on the strength of the input. Specifically, we find that the temporal exponential decay constants increase with increasing input strength. This result agrees with single-neuron studies which have found that response latencies in V1 decrease with increasing stimulus contrast [ , – ]. We now turn to describing our model. Let $x \in \mathbb{C}^N$ be the activity vector for a network of neurons which evolve in time according to the normal-form equation: $$\dot{x}_i = \sum_j A_{ij} x_j - |x_i|^2 x_i + I_i(t) \quad (1)$$ In this model, originally proposed by Yan and Magnasco , neurons interact with one another through a skew-symmetric connectivity matrix $A$. The cubic-nonlinear term in the model is purely local and does not couple the activity states of distinct neurons, while the external input $I(t) \in \mathbb{C}^N$ to the system may depend on time and have a complex spatial pattern. The original model considered a 2-D checkerboard topology of excitatory and inhibitory neurons. For theoretical simplicity and computational ease, we will instead consider a 1-D checkerboard layout of excitatory and inhibitory neurons which interact through equal-strength, nearest-neighbor connections ( ). In this case, $A_{ij} = (-1)^j s\,(\delta_{i,j+1} + \delta_{i,j-1})$, where $i, j = 0, 1, \ldots, N-1$ and $s$ is the synaptic strength. Boundary conditions are such that the activity terminates to 0 outside of the finite network.
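As an illustration of how Eq. (1) can be explored numerically, the sketch below integrates the network forward in time for the 1-D checkerboard connectivity defined above. It is a minimal toy implementation and not the authors' code: the network size, synaptic strength, forcing amplitude, and the simple forward-Euler integration scheme are all illustrative assumptions.

```python
import numpy as np

# Minimal sketch of Eq. (1): dx_i/dt = sum_j A_ij x_j - |x_i|^2 x_i + I_i(t),
# with complex activity x and the 1-D checkerboard connectivity
# A_ij = (-1)^j * s * (delta_{i,j+1} + delta_{i,j-1}) (open boundaries).

def checkerboard_A(N, s=1.0):
    A = np.zeros((N, N))
    for i in range(N):
        for j in (i - 1, i + 1):
            if 0 <= j < N:
                A[i, j] = (-1) ** j * s
    return A

def simulate(A, I, T=50.0, dt=1e-3):
    """Forward-Euler integration of the normal-form network; returns x(T)."""
    x = np.zeros(A.shape[0], dtype=complex)
    for step in range(int(T / dt)):
        x = x + dt * (A @ x - np.abs(x) ** 2 * x + I(step * dt))
    return x

# Drive the network at one of its eigenfrequencies with a weak, spatially uniform input.
N = 64
A = checkerboard_A(N)
omega = np.abs(np.linalg.eigvals(A).imag).max()   # largest eigenfrequency
F = 0.05                                          # assumed input strength
x_T = simulate(A, lambda t: F * np.exp(1j * omega * t) * np.ones(N))
print(np.round(np.abs(x_T), 3))
```

The skew-symmetric structure guarantees a purely imaginary spectrum, and the local cubic term keeps the forced activity bounded, so even this crude integrator settles onto an oscillatory response at the driving frequency.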
We are specifically interested in the time-asymptotic response of the system, but explicitly integrating the stiff, high-dimensional ODE in ( ) is difficult. Fortunately, we can bypass numerical integration by assuming periodic input of the form $I(t) = Fe^{i\omega t}$, where $F \in \mathbb{C}^N$, and looking for solutions $X(t) = Ze^{i\omega t}$, where $Z \in \mathbb{C}^N$. Substituting these into ( ), we find that $$0 = (A - i\omega)Z - |Z|^2 Z + F \quad (2)$$ and we define $g(Z)$ to be equal to the right-hand side of ( ). The solution of ( ) can be found numerically by using the multivariable Newton-Raphson method in $\mathbb{C}^N$: $$\tilde{Z} \rightarrow \tilde{Z} - J(\tilde{Z})^{-1}\,\tilde{g}(Z) \quad (3)$$ where $\tilde{Z}$ and $\tilde{g}$ are the concatenations of the real and imaginary parts of $Z$ and $g$, respectively, and $J$ is the Jacobian of $\tilde{g}$ with respect to $\tilde{z}$: $J_{ij}(\tilde{z}) = \partial \tilde{g}_i / \partial \tilde{z}_j$. Following the lead of previous work and experimental studies , we assume the input strength from the lateral geniculate nucleus to V1 to be a linear function of the stimulus contrast. To test how the response of a single neuron in our network varies with both the contrast and the length of the stimulus, we select a center neuron at index $c$ and then calculate, for a range of input strengths, the response of the neuron as a function of the input length around it. Formally, for each input strength level $B \in \mathbb{R}$, we solve ( ) for: $$F_k(B, l) = \begin{cases} B v_k & \text{if } k \in [c-l,\, c+l] \\ 0 & \text{otherwise} \end{cases} \quad (4)$$ where $k = 0, \ldots, N-1$, $v \in \mathbb{C}^N$ describes the spatial shape of the input, and $2l+1$ is the length of the input in number of neurons. The response of the center neuron is taken as the modulus of $Z_c$, and we focus on the case where $\omega$ is an eigenfrequency of $A$ and $v$ the corresponding eigenvector. The results for a 1-D checkerboard network of 64 neurons are shown in . Here we fix a center neuron and sweep across a small range of eigenfrequencies $\omega$ of $A$. The curves from bottom to top correspond to an ascending order of base-2 exponentially distributed input strengths $C = 2^i$. For all eigenfrequencies, the peaks of the response curves shift towards larger input lengths as the input strength decreases. In fact, for very weak input, the response curves rise monotonically over the entire range of input lengths without ever reaching a maximum in this finite network. This is in contrast to the response curves corresponding to strong input, which always reach a maximum but, depending on the eigenfrequency, exhibit varying degrees of response suppression beyond the maximum. This is consistent with the variability of response suppression seen in primary visual cortex studies . In , eigenfrequencies $\omega$ = 1.92, 1.96, 1.99 show the greatest amount of suppression while the others display little to none.
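A rough sketch of this steady-state calculation is given below. It substitutes SciPy's general-purpose root finder for the hand-coded Newton-Raphson iteration of Eq. (3), and the input strength B, the sampled half-widths l, and the target eigenfrequency (near 1.92) are illustrative assumptions rather than the exact values behind the published figures.

```python
import numpy as np
from scipy.optimize import fsolve

def checkerboard_A(N, s=1.0):           # repeated from the earlier sketch so this runs standalone
    A = np.zeros((N, N))
    for i in range(N):
        for j in (i - 1, i + 1):
            if 0 <= j < N:
                A[i, j] = (-1) ** j * s
    return A

def steady_state(A, omega, F):
    """Solve 0 = (A - i*omega) Z - |Z|^2 Z + F on the concatenated real/imag parts."""
    N = A.shape[0]
    def g_real(z_cat):
        Z = z_cat[:N] + 1j * z_cat[N:]
        g = (A - 1j * omega * np.eye(N)) @ Z - np.abs(Z) ** 2 * Z + F
        return np.concatenate([g.real, g.imag])
    z0 = np.concatenate([F.real, F.imag]) + 1e-3   # small offset avoids a singular Jacobian at Z = 0
    sol = fsolve(g_real, z0)
    return sol[:N] + 1j * sol[N:]

# Response of a center neuron to the boxcar input of Eq. (4) for several half-widths l.
N, c, B = 64, 32, 0.25                     # B is an assumed input strength
A = checkerboard_A(N)
evals, evecs = np.linalg.eig(A)
k = np.argmin(np.abs(evals.imag - 1.92))   # eigenmode with frequency near 1.92
omega, v = evals.imag[k], evecs[:, k]
for l in (2, 8, 16, 31):
    F = np.zeros(N, dtype=complex)
    F[c - l:c + l + 1] = B * v[c - l:c + l + 1]
    print(l, round(abs(steady_state(A, omega, F)[c]), 4))
```

Sweeping B over several octaves and plotting |Z_c| against l should qualitatively reproduce the shift of the response peak toward larger input lengths as the input weakens.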
To understand why certain eigenfrequencies lead to suppression, we fix the eigenfrequency to be $\omega = 1.92$ and examine the response curves of different center neurons. The responses of four center neurons (labeled by network position) and the modulus of the eigenfrequency's corresponding eigenvector are plotted in . The center neurons closest to the zeros of the eigenvector experience the strongest suppression for long line lengths. Neuron 38, which lies closer to the peak of the eigenvector's modulus, experiences almost zero suppression. This generally holds for all eigenvectors and neurons in the network, as all eigenvectors are periodic in their components with an eigenvalue-dependent spatial frequency. The periodicity of the eigenvectors arises from the fact that $A^2$, which shares the same eigenvectors as $A$, is a circulant matrix. To strengthen the connection between model and neurophysiology, one can consider a critically-balanced network with an odd number of neurons so that 0 is now an eigenfrequency of the system. In our model, input associated with the 0-eigenmode represents direct-current input to the system, which is what neurophysiologists utilize in experiments; the visual input is not flashed . Contrary to the even case, long-range connections must be added on top of the nearest-neighbor connectivity in order to recover periodic eigenvectors and hence suppression past the maxima of the response curves. Next, we show that the network not only selectively integrates input as a function of input strength but also operates on multiple timescales which flexibly adapt to the input. This behavior is not surprising given that, in the case of a single critical Hopf oscillator, the half-width of the resonance (the frequency range over which the oscillator's response falls by half) scales with the forcing strength of the input as $\Gamma \propto F^{2/3}$, where $\Gamma$ is the half-width and $F$ the input strength . Thus, decay constants in the case of a single critical oscillator should grow with the input forcing strength as $F^{2/3}$. Assuming input $Fe^{i\omega t}$, as described above, the network activity $x(t)$, given by ( ), decays exponentially in time to a stable limit cycle, $X(t) = Ze^{i\omega t}$. This implies that for any neuron $i$ in the network, $|x_i(t)| = e^{-bt} f(t) + |Z_i|$ during the approach to the limit cycle. We therefore plot $\log\big(\big|\,|x_i(t)| - |Z_i|\,\big|\big)$ over the transient decay period and estimate the slopes of the linear regimes. We do this for an input length nearly equal to the network size (input length = 29, $N$ = 32) and a range of exponentially distributed input strengths. In , we plot representative transient periods of a single neuron corresponding to three input strengths: $2^{-10}$, $2^{-4}$, and $2^{2}$. For weak input there is a single fast exponential decay regime (red) that determines the system's approach to the stable limit cycle. As we increase the input, however, the transient period displays two exponential decay regimes: the fast decay regime (red), which was observed in the presence of weak input, and a new slow decay regime (blue) immediately preceding the stable limit cycle. For very large input strength, the slow decay regime becomes dominant. The presence of multiple decay regimes is a surprising result that does not appear in the case of a single critical Hopf oscillator. We estimate the exponential decay constants as a function of input strength and plot them on a log-log scale in . The red circles correspond to the fast decay regime, while the blue circles correspond to the slow decay regime, which becomes prominent for large forcings. We separately fit the slow and fast decay regimes with best-fit lines. Unsurprisingly, the slopes of the lines are equal and approximately $2/3$. Thus, the decay constants grow with the input as $\propto F^{2/3}$, where $F$ is the input strength. This implies that the system operates on multiple timescales, dynamically switching from one to another depending on the magnitude of the forcing. Larger forcings lead to faster network responses.
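The decay-constant estimate described here amounts to a linear fit on a log scale, sketched below. The trajectory x_t, the limit-cycle amplitude Z_abs, and the fitting windows are assumed to come from a time-domain simulation such as the earlier sketch; the window indices shown are placeholders, not values used in the study.

```python
import numpy as np

def decay_constant(t, x_t, Z_abs, window):
    """Fit log| |x(t)| - |Z| | over t[window]; the negative slope is the decay constant b."""
    resid = np.abs(np.abs(x_t) - Z_abs)
    sel = slice(*window)
    slope, _intercept = np.polyfit(t[sel], np.log(resid[sel]), 1)
    return -slope

# Illustrative usage: fit the fast and slow regimes of one transient separately.
# b_fast = decay_constant(t, x_t, Z_abs, (100, 2000))
# b_slow = decay_constant(t, x_t, Z_abs, (8000, 20000))
```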
In this paper, we consider a line of excitatory and inhibitory neurons, but our results hold equally well for a ring of neurons with periodic boundary conditions and appropriately chosen long-range connections. Ring networks have been studied extensively as a model of orientation selectivity in V1 [ – ]. In agreement with recent findings , the critically-balanced ring network exhibits surround suppression in orientation space when long-range connections are added on top of nearest-neighbor connectivity. We have shown that a simple dynamical system poised at the onset of instability exhibits an input-strength-dependent scale of integration of the system's input and input-strength-dependent response latencies. This finding strongly complements our previous results showing that a similar nonlinear process with fixed, nearest-neighbor network connectivity leads to input-dependent functional connectivity. This system is thus the first proposed mechanism that can account for the contrast dependence of spatial summation, functional connectivity, and response latencies. In this framework, these three characteristic properties of signal processing in V1 are intrinsically linked to one another. As our model is just a toy model of center manifold dynamics, we do not suggest that a ring or 1-D line topology is necessarily present in V1 anatomy; however, if the brain does indeed utilize center manifolds in the processing of sensory input, the full high-dimensional phase space of cortical dynamics might be reducible to simple, low-dimensional structures on the center manifold. The theory of V1 dynamics presented in this paper makes two testable predictions. The first is the specific form of the relationship between spatial and temporal frequencies of neural activity in V1. In physics, the relationship between the spatial and temporal frequencies in a system is known as the dispersion relation. Yan and Magnasco have shown that the dispersion relation of the system considered in this paper and described by ( ) is elliptical, $c^2 k^2 + \omega^2 = 1$, where $k$ and $\omega$ are the spatial and temporal frequencies, respectively, and $c$ is a constant. If our theory is correct, multielectrode array recordings in V1 should reveal elliptic dispersion relations. Unfortunately, we are unaware of any studies that have examined dispersion relations in V1. Our theory also makes testable predictions regarding temporal response latencies in V1. In particular, our theory implies that temporal decay constants in V1 should increase as a power law with the contrast level. The predicted exponent for the power law is 2/3, which could be tested for in single or multielectrode recordings. Experiments could also test for the presence of multiple relaxation timescales, which our model predicts.
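One way the predicted 2/3 exponent could be checked, whether on simulated decay constants or on decay constants derived from recorded latencies, is an ordinary least-squares fit in log-log coordinates. The synthetic values below simply obey the predicted scaling and are included only to show the bookkeeping, not as data.

```python
import numpy as np

def power_law_exponent(F_vals, b_vals):
    """Slope of log(b) versus log(F); the model predicts a value near 2/3."""
    slope, _ = np.polyfit(np.log(F_vals), np.log(b_vals), 1)
    return slope

# Synthetic check: fast and slow decay constants with the same 2/3 exponent
# but different prefactors, mirroring the two regimes described above.
F_vals = 2.0 ** np.arange(-10, 3)
print(power_law_exponent(F_vals, 0.8 * F_vals ** (2 / 3)))  # fast regime, ~0.667
print(power_law_exponent(F_vals, 0.1 * F_vals ** (2 / 3)))  # slow regime, ~0.667
```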
Expanding the scope of tele-ophthalmology from vision centers to home
0c732567-da67-4cba-9aa7-429a9435c994
7933896
Ophthalmology[mh]
Yorkshire Lung Screening Trial (YLST) pathway navigation study: a protocol for a nested randomised controlled trial to evaluate the effect of a pathway navigation intervention on lung cancer screening uptake
fddefba1-ec35-47b8-8ff0-0fac222ecdde
11243133
Patient-Centered Care[mh]
Lung cancer is the most common cause of cancer death globally, causing approximately 35 000 deaths per year in the UK. The low survival rate (around 85% of people die within 5 years of a diagnosis ) is largely attributed to the majority of people being diagnosed at a late stage when treatment is less likely to be curative. Survival rates are lower still for people experiencing socioeconomic deprivation yet their incidence rates are highest. Globally, trials of lung cancer screening using low-dose CT scanning (LDCT) to detect asymptomatic, early-stage disease, have demonstrated a reduction in lung cancer mortality compared with controls in high-risk populations (adults over 55 with a smoking history). A meta-analysis of nine trials found LDCT screening was associated with a 16% relative reduction in lung cancer mortality compared with non-screened controls. Based on this combined evidence, the UK National Screening Committee has recommended implementation of a nationwide targeted lung cancer screening programme. This programme is being rolled out across England via the expansion of the Targeted Lung Health Check programme, which currently offers LDCT lung cancer screening to people at high risk of lung cancer. While lung cancer screening offers great promise for reducing lung cancer deaths, care must be taken not to widen the existing lung cancer mortality deprivation gap. Socioeconomic and smoking-related inequalities in uptake have consistently been observed across trials and services internationally, with uptake significantly lowest among people living in the most deprived areas, and those who currently smoke. There is also emerging evidence of lower participation by ethnic minority groups, such as those of white ethnicity other than British. This means those communities with the highest lung cancer incidence are least likely to be screened, reducing both the equity and effectiveness of screening. The ongoing Yorkshire Lung Screening Trial (YLST), which invites adults with a smoking history aged 55–80 to a telephone-based risk assessment to determine eligibility for a Lung Health Check screening appointment (subsequently offered at a mobile unit), found just 50.8% of those invited took up the offer of the telephone assessment in the first round of screening. Crucially, uptake was skewed towards those from more affluent areas, and those people who had quit smoking (compared with those who continue to smoke). This was despite the use of evidence-based interventions from other screening programmes, including general practitioner (GP) endorsement, a low-burden leaflet and reminder invitations. These figures compare favourably to other UK trials and pilots, which have a 20.4%–52.6% range in response. Pathway navigation (PN), sometimes referred to as patient navigation, aims to help individuals overcome barriers to early and effective diagnosis and treatment, particularly among populations who experience inequalities in cancer outcomes. It can be split into three phases: (1) navigation to screening, (2) diagnostic evaluation and (3) treatment. Research in the USA suggests navigation programmes are effective at improving participation in different types of cancer screening and an American Thoracic Society statement concluded navigation should be integrated within lung cancer screening programmes as a strategy to reduce disparities in uptake. 
In a randomised trial (n=1200) comparing navigators with usual care among a lower socioeconomic population, uptake was significantly improved in the navigator arm (23.5%) relative to usual care (8.6%). To date, there have been no published studies specifically testing PN for individuals invited to lung cancer screening in the UK. One small feasibility study tested whether navigation by specialist screening practitioners could improve the uptake of bowel scope screening. However, in that context, it was not feasible to successfully implement the intervention, as telephone numbers were not available from health records and participants had to explicitly consent to take part in the study before providing their telephone number to receive navigation. The authors reflected that this resulted in a selection bias, whereby only participants engaged in the screening process (and arguably those least likely to require navigation) would consent to take part. However, many cancer alliances in the UK (which do not require consent prior to patient contact) have demonstrated the feasibility of using telephone-based interventions to improve the uptake of cancer screening where numbers are available. For example, a case study from the West London RM Partners Cancer Alliance reports successful contact of 13 000 individuals by telephone, of whom 25% subsequently participated in bowel cancer screening. While the potential of PN to reach high-risk non-responder populations is promising, adoption of this type of intervention within a national screening programme in the UK would involve considerable resources. It is, therefore, vital that its effect is analysed within the UK health system to demonstrate efficacy and cost-effectiveness. Likewise, it is crucial to understand any barriers to intervention efficacy that may lie in implementation and service design. The YLST recently received funding for a third round of screening to continue the provision of biennial Lung Health Checks. This offers an opportunity to further develop an understanding of barriers to participation and to test whether PN can maximise engagement in a lung cancer screening programme in the UK context. Therefore, the aim of the study is to determine whether PN results in more individuals participating in the telephone risk assessment and, if eligible, the LDCT screening scan as part of their Lung Health Check, compared with the usual written invitation, within a previous non-responder population. In addition, a mixed-methods process evaluation with people who have repeatedly not responded to invitation will provide an avenue to understand issues preventing individuals from considering or being able to take part in LDCT lung cancer screening, as well as how PN might work, its acceptability, and how it could be protocolised at scale, to inform screening invitation and delivery.
Study design
A two-arm randomised controlled trial (RCT) with mixed-methods process evaluation nested within the YLST. The methods of the YLST are published elsewhere. The study flow is described in . The protocol is reported in line with the Standard Protocol Items: Recommendations for Interventional Trials (SPIRIT) statement ( ).
Outcomes
For the nested RCT, the outcomes are stated below. These outcomes will also be reported for the process evaluation.
Coprimary outcomes
The number undergoing initial telephone assessment of lung cancer risk.
The number undergoing an LDCT screening scan.
Secondary outcomes
Demographic, clinical and risk parameters of people undergoing telephone assessment of lung cancer risk.
The number undergoing telephone assessment of lung cancer risk who were found eligible for screening (irrespective of whether they subsequently attended for LDCT).
The number of screen-detected lung cancers diagnosed.
Costs of delivering the PN intervention.
Quantitative and qualitative process evaluation (assessed through observations of strategies and barriers; acceptability; intervention fidelity; and semistructured interviews).
Demographic, clinical, risk and lung cancer outcomes will also be compared with the original YLST baseline responder group, and with the group that responded following written reinvitation during the second round of screening in YLST.
Participants
Repeat non-responders (nested RCT and process evaluation participants): Participants for the RCT will be those who have not responded to a postal invitation to participate in a telephone assessment of lung cancer risk (Lung Health Check eligibility) within the YLST, either as part of the initial invitation round or after being reinvited approximately 2 years later.
Disengaged responders (process evaluation participants): Participants in this group will not be randomised within the nested RCT but will all receive PN and be recruited to the process evaluation. This group is made up of YLST participants who have responded to at least one of the first two rounds of Lung Health Check invitations, undergone the telephone triage and were found to be high risk, but did not attend for their most recent LDCT screening scan. This includes people first found to be eligible through telephone assessment of lung cancer risk during: (1) the first screening round, who attended their first scan and remained eligible at their subsequent eligibility reassessment during the second round but did not attend for their second LDCT screening scan appointment; or (2) the second screening round, who did not attend their LDCT screening scan appointment. The group does not include people who were risk assessed during the first screening round, found to be eligible, but did not attend their first scan (since they were already recontacted during the second round).
Eligibility criteria
For the nested RCT, the full eligibility criteria are stated below. Eligibility criteria for the process evaluation are as described above, including both repeat non-responders and disengaged responders.
Inclusion criteria
Aged 55–80 (inclusive) at the time of the planned screening block (all non-responding participants will be assigned an intended screening block between March 2023 and October 2024; participant age will be calculated on the first day of screening during that block to determine eligibility).
Assigned to the intervention group (lung cancer screening arm) of YLST.
No previous YLST telephone triage assessment.
Exclusion criteria
Previous registration of dissent from data usage or further contact from the YLST team.
National data opt-out recorded prior to the screening round.
Lung cancer diagnosis within 5 years prior to the screening round.
Any metastatic cancer diagnosis.
No telephone number on the Leeds Teaching Hospitals Electronic Patient Record.
Randomisation
Repeat non-responders will be randomised to PN or usual care (control) by household, so that cohabiting individuals are assigned to the same arm of the study to avoid contamination.
Sample size

Nested RCT
The sample size for the RCT is calculated based on the number of people who have not responded to the invitation during the first screening round of YLST. Of the 22 128 people who did not respond during the baseline round, 20 074 people were aged ≤76 years at the time of original GP data extraction (and thus would be aged ≤80 years at the time of proposed PN randomisation) and remained alive at the time of designing the PN study. After accounting for an anticipated 10% response rate to written invitation during the second round, we estimated 18 067 potentially eligible participants for PN randomisation, of whom 74.1% had contact numbers recorded, leaving a total eligible population of 13 388. An 8% response rate to written invitations is anticipated in the control arm, increasing to 12% in the PN arm, based on previous randomised navigator trials in lower socioeconomic populations. Baseline data from YLST indicate approximately 35% of telephone responders were deemed high risk. Assuming a similar rate in the repeat non-responders in this nested PN substudy, and further considering that only 84% of those assessed as high risk had an LDCT scan (as was observed during the first round of screening in YLST), this would imply an increase from 2.4% (standard invitation arm) to 3.5% (PN arm) in the proportion of those invited who respond, are found to be high risk and receive an LDCT scan. Based on these assumptions, and with an anticipated 13 388 participants, as can be seen from the sample size calculation provided in , there would be ample power (>90%) to detect an increase in both response with telephone assessment and attendance in terms of undergoing an LDCT scan, using two-sided tests at the 2.5% significance level to account for coprimary outcomes, using a continuity correction, and adjusting for centre effects by household as appropriate.
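As a rough check on the figures above, the sketch below reproduces the 2.4% and 3.5% expected LDCT yields from the stated assumptions and approximates the power for both coprimary outcomes at the 2.5% two-sided level. It uses a plain normal approximation in statsmodels, without the continuity correction or household adjustment specified in the protocol, so it is an illustration rather than the trial's formal sample size calculation.

```python
from statsmodels.stats.proportion import proportion_effectsize
from statsmodels.stats.power import NormalIndPower

n_total = 13_388                 # eligible repeat non-responders with a telephone number
n_per_arm = n_total // 2
alpha = 0.025                    # two-sided level, split across the two coprimary outcomes

# Expected cumulative LDCT yield under the stated assumptions:
# response rate x proportion high risk x proportion of high-risk people scanned
response = {"control": 0.08, "PN": 0.12}
high_risk, scanned = 0.35, 0.84
ldct = {arm: r * high_risk * scanned for arm, r in response.items()}
print(ldct)                      # ~2.4% (control) vs ~3.5% (PN), matching the text

# Approximate power for each coprimary outcome (simple normal approximation)
power = NormalIndPower()
for outcome, (p_control, p_pn) in {
        "telephone risk assessment": (0.08, 0.12),
        "LDCT screening scan": (ldct["control"], ldct["PN"])}.items():
    es = proportion_effectsize(p_pn, p_control)
    print(outcome, round(power.power(es, nobs1=n_per_arm, alpha=alpha,
                                     ratio=1.0, alternative="two-sided"), 3))
```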
Process evaluation
The sample size for each component of the process evaluation is as follows:
Quantitative evaluation of barriers and strategies to undergoing a risk assessment and/or attending a screening appointment discussed during PN calls. These will be recorded using a standardised observation proforma ( ) after every PN telephone call (up to five per participant) and after every telephone triage telephone appointment conducted within the control arm.
Semistructured interviews (n<60): Purposive sampling will be used to recruit a diverse sample of participants who have undergone the PN intervention with respect to demographic characteristics, smoking status and intention to attend a lung screening appointment. Up to 30 participants will be interviewed within each group (ie, non-responders and disengaged), in line with norms for qualitative research, and to ensure adequate representation of those subsequently attending and declining a screening appointment.
Fidelity observations: A subset of 200 PN calls will be observed. The number has been selected to ensure variability in the calls being observed (with regard to screening day, team member delivering the call, etc) while being possible to conduct within the time frame and in line with other similar studies of intervention fidelity.

Intervention

Development
A theory-and-evidence-based intervention development process was followed. The navigation-to-screening manual was informed by a bowel screening navigation intervention. Strategies and barriers were based on existing research investigating factors influencing lung cancer screening uptake and through discussion with YLST trial practitioners and patient and public involvement (PPI) representatives. Theoretically, the intervention is informed by the Capability Opportunity Motivation-Behaviour (COM-B) model, which is the central behaviour system of the behaviour change wheel.

PN intervention protocol

Component 1: advanced notification
Participants receive an advanced notification letter ( ) ~3 weeks before a scheduled telephone appointment. The notification frames the appointment as an ‘Introduction to Lung Health Checks’ telephone call. It includes a low-burden leaflet ( ) with basic information about the Lung Health Check (similar to that sent in the control arm, but with instructions to call the team to book a risk-assessment removed) and an A5 flyer with dissent information.

Component 2: protocolised ‘Introduction to Lung Health Checks’ telephone call
At the prearranged time, a pathway navigator telephones the participant to introduce and discuss the lung screening offer. If contact is unsuccessful, the pathway navigator attempts two more telephone calls at different times on varied days. The PN team (pathway navigators) comprise trained senior clinical trials assistants, administrative officers and research nurses. shows the phased telephone approach as protocolised in the PN training manual ( ). There are four key stages, with step 3 (strategies and barriers discussion) taking place flexibly throughout the call:
1. Introducing the offer of lung screening (Lung Health Check).
2. Assessing eligibility for LDCT lung cancer screening.
3. Eliciting and addressing any barriers to capability, opportunity and motivation for attending screening using core communication competencies, problem-solving, behaviour change techniques and practical arrangements.
4. If eligible, arranging either a Lung Health Check appointment, another navigation telephone conversation (to offer further support/deliver the intended strategy as per protocol), or respecting the individual’s choice not to engage.

Component 3: reminder telephone calls
If the participant books a screening appointment, they will also receive reminder calls approximately 1 week and 1 day before the booked appointment.

PN intervention training
All pathway navigators receive the protocolised training manual ( ). This covers the structure of the telephone call; principles of informed choice and types of non-responder (informed, misinformed, disengaged, inclined); common barriers and suggested solutions for navigators to offer for individuals to overcome them; and behavioural science-informed communication techniques (motivational interviewing, use of simple language, implementation intentions and teach-back). All pathway navigators attend a training course in motivational interviewing, and a 1-day interactive session led by behavioural scientists on the contents of the training manual before delivering the intervention.
Check-ins are arranged 1 month after beginning the calls, and then every 3–6 months for feedback on experiences and to exchange knowledge.

Control arm (usual care)
The control arm will receive the same invitation materials used to reinvite non-responders 2 years after the initial screening round of YLST (a GP-endorsed invitation letter, a low-burden reinvitation information leaflet, a dissent information statement and a follow-up reminder letter if they do not respond to the initial reinvitation). People without an available telephone number (who are therefore excluded from the PN arm) would be reinvited using the same method but not included in the analysis for this substudy, as there may be systematic bias in the characteristics of those who do not have a telephone number recorded. The letters invite the individual to telephone the YLST team to undergo a telephone-based risk assessment for a Lung Health Check appointment where LDCT lung cancer screening is offered.
Quantitative

Strategies and barriers observations
At the end of each call, PN intervention acceptability will be assessed by asking individuals if they minded being telephoned and whether they found the telephone conversation helpful.
After each call, the navigator will record the barriers raised and strategies used on a standardised proforma ( ), informed by the COM-B framework, PPI and stakeholder review, and previous research examining barriers to lung cancer screening in low participation groups at high risk of lung cancer. The same proforma will be completed following each telephone conversation with individuals in the control arm.

Fidelity observations
A subset of 200 PN calls will be observed live by a researcher who will score fidelity to the telephone protocol using a standardised proforma ( ). The proforma is based on similar work being conducted as part of the Yorkshire Enhanced Stop Smoking (YESS) smoking cessation trial, and guidelines on fidelity measure development. It will capture anonymous data relating to the degree to which navigators deliver the call in line with the manualised intervention. The observations will be conducted across different days with different team members delivering the calls to ensure variability in the sample. To pilot the process, the first 10 calls will be scored for fidelity by two researchers. Scores will be compared, discussed and the scoring guidance adapted to ensure clarity if needed. The process will be repeated for the next 10 calls to finalise the scoring guidance. Inter-rater agreement will be reported for both pilot sessions to indicate reliability of the measure.
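One way the inter-rater agreement from the pilot scoring sessions could be summarised is with Cohen's kappa; the sketch below uses hypothetical item-level ratings and assumes scikit-learn is available. The study team may of course report a different agreement statistic or use other software.

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical item-level fidelity ratings from the two researchers who
# independently scored the same pilot calls (1 = step delivered per manual).
rater_a = [1, 1, 0, 1, 1, 1, 0, 1, 1, 1, 0, 1]
rater_b = [1, 1, 0, 1, 0, 1, 0, 1, 1, 1, 0, 1]

kappa = cohen_kappa_score(rater_a, rater_b)
print(f"Cohen's kappa for the pilot scoring session: {kappa:.2f}")
# A low value would prompt the planned discussion and refinement of the
# scoring guidance before the next 10 calls are scored.
```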
Qualitative

Semi-structured interviews
At the end of each PN call, individuals will be asked if they are willing to be contacted about taking part in an interview study. Potential participants will be invited to take part by letter (posted and/or handed out during their LDCT screening appointment), with instructions to complete and return an expression of interest form by telephone, freepost letter or email to the behavioural science research team if they wish to take part. Eligible participants who are recruited in line with the purposive sampling strategy will give audiorecorded verbal consent to take part using the consent form. Semistructured interviews of ~1 hour will be carried out by a behavioural science researcher either by telephone or video call within 3 months of the PN intervention. The interview schedule ( ) was informed by the COM-B framework and previous research into the factors affecting lung cancer screening participation. Open questions, prompts and probes will be used to explore the acceptability and experience of the scheduled telephone appointment approach and the types of barriers and strategies discussed.

Field notes on fidelity form
During fidelity observations, researchers will record field notes on contextual information related to fidelity that may be important for results interpretation (eg, navigators’ response to indirect behavioural indicators).
A statistical analysis plan will be drafted by the trial statistician and reviewed by an independent statistician. Baseline characteristics by trial arm and a diagram depicting the number and flow of patients through the trial will be presented.

Primary analysis for the nested RCT
Descriptive analyses will be used to present numbers and proportions of interest as listed in the coprimary and secondary outcomes. To account for the related nature of the coprimary outcomes, proportions and absolute differences in proportions will be presented with associated 97.5% CIs. Formal primary comparisons of differences between groups will be made using logistic regression and adjusting for centre effects by household as appropriate. All other statistical tests will be two sided using a 5% significance level unless otherwise specified; 95% CIs will be reported as appropriate.
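A minimal sketch of how such a primary comparison might be run is shown below: a logistic model for the arm effect with standard errors clustered by household and 97.5% confidence intervals. The simulated data frame and variable names are hypothetical, and the trial's statistical analysis plan may specify a different model or adjustment.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulate a hypothetical analysis dataset: households are randomised 1:1 and
# every invitee in a household inherits its arm (response rates as assumed above).
rng = np.random.default_rng(7)
rows = []
for h in range(4000):
    arm = rng.choice(["control", "PN"])
    p = 0.12 if arm == "PN" else 0.08
    for _ in range(rng.choice([1, 2], p=[0.8, 0.2])):
        rows.append({"household": h, "arm": arm,
                     "telephone_assessed": int(rng.binomial(1, p))})
df = pd.DataFrame(rows)

# Logistic regression for the arm effect, with standard errors clustered by
# household to reflect the household-level randomisation.
fit = smf.logit("telephone_assessed ~ C(arm, Treatment('control'))", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["household"]}, disp=False)
print(fit.summary())
print(fit.conf_int(alpha=0.025))   # 97.5% CIs, as planned for the coprimary outcomes
```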
Process evaluation

Quantitative data
Descriptive analyses will be used to anonymously report strategies and barriers to participation; and fidelity scores to describe and quantify the extent to which the call is delivered in line with the intervention manual.

Qualitative data
Applied thematic analysis will be used to inductively code qualitative data from the semi-structured interviews and fidelity field notes and to interpret themes within a skeletal conceptual framework, using qualitative data analysis software NVivo. The framework will initially be based on the COM-B model but may be iteratively expanded and adapted. Initial coding will be carried out by one researcher and a subset of randomly selected transcripts coded independently by a second researcher. There will be multiple opportunities for discussion and iteration of the emerging coding framework.

Triangulation of mixed-methods data
A concurrent triangulation design will be followed, with quantitative and qualitative data collected and analysed separately, but converged during interpretation to inform a logic model for PN in the lung cancer screening context.

Costs
A health economic analysis of the PN intervention will be conducted as part of the broader YLST trial aim of determining the cost-effectiveness of the YLST screening programme.
REC and other regulatory review
The YLST was approved by the Greater Manchester West Research Ethics Committee (REC) (18-NW-0012) and the Health Research Authority following review by the Confidentiality Advisory Group (CAG). The PN substudy was approved by the REC and CAG as a Substantial Amendment on 5 October 2022.

Trial monitoring and governance
Trial governance is overseen by the trial management group, the independent data monitoring committee and an independent trials steering committee. Full details are provided in the YLST protocol.

Informed consent
The YLST and nested PN study follow a service demonstration design to provide a realistic indication of uptake following PN in a real-world clinical context, improve ecological validity and avoid the Hawthorne effect, following a precedent set by earlier lung cancer screening trials. This means participants will be unaware of the research nature of the PN intervention or YLST until they attend for a Lung Health Check screening appointment, where fully informed consent will be collected to participate in the YLST. Individual consent will, therefore, not be sought before randomisation to the PN study. However, the opportunity and mechanism for dissent will be clearly described in the advance notification letter sent with details of the scheduled telephone appointment so individuals can opt out if they do not agree to the use of this information. The opportunity to dissent from the wider YLST was also provided with all previous invitations. This approach was REC and CAG approved, and deemed acceptable by PPI representatives.

Patient and public involvement
A PPI consultation exercise was undertaken with 12 individuals (5 male/7 female) aged between 50 and 80 years, including individuals with a smoking history who had lived experience of cancer, including lung cancer (n=7), as well as those without (n=5), to gain insight from a broad range of perspectives. The feedback was used to inform the development of the PN intervention, data collection tools and to understand the acceptability of the observational methods used for the process evaluation. All 12 individuals were supportive of the research.

Dissemination
Study findings will be submitted for publication to relevant peer-reviewed journals in accordance with the Consolidated Standards of Reporting Trials (CONSORT) Statement and presented at conferences. A summary of the results will be provided for participants on the study website.
This study is the first in the UK to test whether a novel PN intervention increases uptake of lung cancer screening (Lung Health Checks) among high-risk individuals who have not responded to prior invitations within the YLST. The study will also assess the cost-effectiveness of implementing such an intervention. The mixed-methods process evaluation will provide insights into the barriers preventing people from taking part in screening, and the process through which a PN intervention may work, and its acceptability. If shown to be effective, the intervention could be integrated into the national lung cancer screening programme. This could facilitate participation by those at the highest risk of lung cancer who are otherwise least likely to respond to the screening offer (ie, people who smoke and those experiencing socioeconomic deprivation) to maximise its equity and effectiveness in reducing lung cancer deaths.

Online supplemental files 1–6 (10.1136/bmjopen-2024-084577).
Co-Design and Validation of a Family Nursing Educational Intervention in Long-Term Cancer Survivorship Using Expert Judgement
4cf11d26-ff7c-4607-ad58-a15e8cb620d9
9864059
Internal Medicine[mh]
There are currently 32.6 million cancer survivors worldwide . In addition, there is an increasing number of long-term cancer survivors, i.e., individuals who are disease-free at 5 years after their diagnosis and the completion of their treatment, although many face late physical, psychological, and socioeconomic sequelae . Therefore, the health system must accommodate the growing need for the long-term follow-up of survivors—who are considered chronic patients—to promote their well-being and improve their quality of life, to facilitate their return to work, and to enable them to live independently and to reduce their rate of cancer recurrence . In order to improve the health outcomes and promote care that meets the needs of cancer survivors, health professionals should work in an interdisciplinary way for the comprehensive care of these survivors and their families . Although cancer is a family concern , families do not receive the support they need and, consequently, they often experience the same distress that survivors experience . Therefore, a comprehensive healthcare framework that addresses the family as the unit of care in cancer care is recommended . The experience of managing cancer and the problems derived from it are complex, especially when the cancer patient and his or her family are at the center of care . Hence, health professionals must be prepared to offer family-focused care in an interdisciplinary way . In this research, the Calgary Model of Family Assessment and Intervention will be used as the theoretical framework as it is one of the most widely used and has been implemented in nursing education curricula worldwide . New graduates will face health demands related to chronic care, such as that required by cancer survivors and their families. Thus, new graduates should develop specific skills to work as a team and to support, educate, and empower long-term cancer survivors and their families. Studies have highlighted the need to train nursing students in the field of oncology . They have also recommended the use of new educational methodologies in nursing, as well as the use of educational environments and tools such as clinical simulation to provide quality training in a safe context . However, there is a lack of nursing curriculum models that address nursing practices for patients and families living with complex and long-term health processes, such as cancer . In addition, no interdisciplinary educational intervention based on active methodologies to train nurses to provide family nursing care in the context of long-term cancer survivorship has been found. Some family interventions have been conducted in acute care areas such as the one conducted by Eggenberger et al. , where one of the central elements was therapeutic conversation between nurses and families in the intensive care unit, or the one developed by Beierwaltes et al. , which incorporated digital storytelling to implement family nursing practice in acute care settings. However, no educational intervention has been found in family nursing for learning to care for families in cancer survivorship. Therefore, there is a need to develop educational interventions that enhance nursing students’ competencies in order to offer students opportunities to learn to assess and intervene in families who live with and beyond cancer, and to integrate this experience into their future clinical practices . 
Furthermore, participatory design that involves people in the co-design of learning tools, educational policies, academic curricula, or support innovation processes has been recommended to ensure these works respond to the needs of the learners . Therefore, the aim of this study was to co-design and validate a family nursing educational intervention in long-term cancer survivorship through an expert judgement. 2.1. Design The expert panel method using qualitative focus groups with experts was used in this study to co-design and validate an interdisciplinary educational intervention. This was considered the most appropriate method because there were no useful previous data on which to base the educational intervention . Unlike Delphi studies, whose objective is to reach a consensus on a given topic through the analysis of quantitative data, the expert panel method does not require a consensus to be reached . Its objective is to obtain opinions from specialist experts and other individuals of interest, and the information that emerges from the group dialogue and interaction is used to validate a product, in this case, the methodological and content design of an educational intervention. The panel evaluation of teaching material consisted of asking a group of experts to discuss the advantages and disadvantages of an educational program (learning objectives, methodology, content, and evaluation) and reach a consensus on the best features of an educational program. This evaluation strategy provides a deeper assessment and detailed information on the subject under study . 2.2. Data Collection 2.2.1. Expert Selection Regarding the procedures for selecting the experts, diversity of opinions are found in the literature, ranging from those that do not imply any selection filter—as in the cases of an affinity or closeness between the expert and the researcher—to those that use a series of structured criteria such as the Biogram or the Expert Competence Coefficient . The present study was carried out by affinity with the research group (belonging to different institutions with which the authors had worked previously). As for the number of experts needed to make up the panel, there is no unanimous agreement on its determination . Some authors point out that the number of experts depends on aspects such as the ease of accessing them or the possibility of knowing enough experts on the subject under investigation . Other authors indicate that the number of experts in a panel depends on the level of expertise and the diversity of knowledge . In this study, specialization was taken into account (all had to be working in the field of oncology either in the clinic or university) and knowledge should include both the main health professionals involved in the care of cancer survivors and the survivors and family members themselves. This is in addition to university lecturers specializing in educational methodologies and the students who will receive the intervention. A convenience sample was used to select the participants based on the representation of the different roles and profiles necessary for an educational intervention based on the expertise of the main actors. 
The panel was comprised of 11 participants from different institutions; of which, 8 were specialist experts (5 healthcare practitioners and 3 academic professors) with scientific knowledge and care experience on the subject under study (long-term cancer survivorship care), 2 were patients/family members, and 1 was a potential intervention recipient (a nursing student at the doorstep of graduation) . The expert panel of participants were recruited in Spain from the areas of primary care, hospitalization, research, and education . 2.2.2. Conducting the Expert Panel The ways of developing expert panels are diverse; in this study, the consensus method was used. As a group and jointly, the participants reached an agreement . In this study, it was necessary for 80% of the panelists to agree. The panel was established in December 2020 and the meetings were held on the 15th and the 22nd of January 2021. Each meeting had a duration of one hour. The expert panel was structured in three consecutive stages. The first stage involved constitution and information, and the second and third involved group meetings for a discussion and drawing conclusions . 2.2.3. Stage 1. Constitution of the Expert Panel, Final Expert Selection, and Information Experts were invited to participate by telephone and email. Following recommendations by Lecours et al. , the email message contained the following information: principal investigator and research team, description of the study, reasons why the expert was selected, procedure to follow to participate in the panel, estimation of the time required (participation in all stages), and confidentiality. All of the experts who were contacted voluntarily agreed to participate in the study, signed the informed consent form, and confirmed their attendance before the first meeting. In response to their acceptance, the experts were sent a file explaining the proposal for the interdisciplinary educational intervention, as well as the link to the videoconference for the first meeting. The file contained the explanation and justification of the study, the composition and characteristics of the expert panel members, and the proposed educational intervention to be validated. 2.2.4. Stage 2. First Expert Panel Meeting The group moderator (principal investigator) thanked all members for their participation and for dedicating their time to the study. She then introduced all of the members and made a brief presentation of the intervention proposal submitted for validation, including the objectives, contents, and methodologies. Next, the experts were asked to provide their inputs regarding the need for the project and its objectives, content, and methodologies, and the discussion began. The first meeting concluded with a brief review of the topics discussed. The moderator also confirmed that minutes of the meeting would be sent out, the date and time of the next meeting was set, and some questions for reflection were provided, focused on helping panelists draw conclusions about the educational intervention. 2.2.5. Stage 3. Second Expert Panel Meeting: Content Validation of the Intervention, Experts’ Opinion, and Proposed Changes The principal investigator read the minutes of the previous meeting, as well as the questions that were sent to the panel members for individual work. 
Some of them were: what would you like nursing students to know about cancer survivorship and the family care of cancer survivors?; do you consider that the proposed educational intervention helps to acquire competence for cancer survivor and family-focused care?; what kind of content do you suggest for the personal work of the students?; what educational methodologies would you use in the training of students?; what do you think is currently missing in the training of nurses for the care of cancer survivors and their families?; what contents do you suggest for students’ personal work?; and what else do you think nursing professionals need to know? All the members of the panel gave their inputs. However, because the cancer survivor could not attend the second meeting due to personal reasons, she sent her contributions in writing to share them with the group. After discussing and analyzing all the contributions, a consensus was reached regarding the design of the interdisciplinary educational intervention, which was called “Learning and Care”. 2.3. Data Analysis A thematic analysis was carried out to identify meaningful pieces of qualitative data from the transcriptions of the two meetings with the experts . Thematic analysis is used to explore experiences, perspectives, and practices and to analyze the qualitative data collected from interviews, focus groups, or surveys, among others . In this study, thematic analysis was used to analyze the perspectives and opinions of experts regarding the design and content of an educational intervention for nursing students. The six steps outlined by Braun and Clarke were as follows: (1) one of the authors (MD) read the full data material closely to become familiar with what the data entails, paying specific attention to patterns that occur; (2) through a data reduction, MD started to generate the initial codes by documenting where and how patterns occur; (3) at this stage, two of the researchers (MD and VL) met for an analytic seminar to combine codes into overarching themes; (4) at this stage, the two researchers discussed a coherent recognition of how the themes were patterned; (5) MD and VL defined each theme which emerged and captured how the themes supported the data; and (6) finally, CGV, who had not been part of the analysis process thus far, audited the final thematic structure and its relationship with the identified data from the participants (analysis process). 2.4. Rigor The rigor of this study was ensured following recommendations by Doyle et al. . Credibility was ensured through a data analysis developed by three qualitative experts in the research team and supported by quotes from the panel experts. Furthermore, the data analysis was guided by reflexivity, whereby previous assumptions were recognized and held in suspension. Reliability was guaranteed by presenting a detailed description of the methods. Transferability and confirmability were safeguarded by presenting detailed information about the participants’ sociodemographic data and the research scenario. 2.5. Ethics Committee Approval The present study, which is part of a larger study, received ethical approval (ref. 2020.161), thus ensuring that it respects the fundamental principles of the Declaration of Helsinki. All participants were aware of the study, participated voluntarily, and signed the corresponding informed consent form. In addition, anonymity, confidentiality, and results communication were guaranteed throughout the research. 
The results are structured according to the three main themes that emerged from the collected data. 3.1. Experts’ Opinions Regarding the Content in the Educational Intervention To acquire competence (knowledge, skills, and attitudes) in family-focused care for long-term cancer survivorship, the following learning objectives were proposed: Understand the needs of long-term cancer survivors and their families. Know the characteristics of the family interview according to the Calgary Family Assessment and Intervention Model . Acquire the ability to conduct a family interview according to the Calgary Model. Encourage an attitude of care focused on the cancer survivor and his or her family. Encourage interdisciplinary work that promotes family-focused care in cancer survivorship. Once the objectives were presented and agreed upon, the contents to be taught during the educational intervention were discussed. Additionally, different educational methods for covering all of the competency dimensions were identified. 
One of the proposed contents was related to the interdisciplinary work, as stated by the oncologist: “ I think it is important to address interdisciplinary work in a round table and how it affects the care of survivors and their families….At the end of the round table, before leaving the classroom, students should be able to answer some questions with their phone or electronic device to encourage them to reflect on what they have heard, mainly about interdisciplinary work.” On the other hand, the psycho-oncologist indicated the importance of including new content so that the students would be aware of the fear of recurrence that survivors and relatives have and stated: “ Patients and family members are afraid of recurrence. Therefore, this concept (fear of recurrence of cancer) must be present in the intervention and the students must learn to give patients and relatives realistic hope.” Subsequently, to understand the importance of interdisciplinarity and to delve deeper into the needs and experiences of cancer survivors and their families, the need for an exchange of experiences among survivors, family members, health professionals, and students was agreed upon. Finally, to integrate the acquisition of the full competence, the experts proposed the inclusion of a clinical simulation in which the learners could experience a “real” assessment and care situation with cancer survivors and their family. 3.2. Combination of Innovative Teaching Methods The expert panel considered it appropriate and relevant to use a combination of innovative methods in the educational intervention to deliver the content appropriately, including the following three methods: a flipped classroom, round table, and clinical simulation. The flipped classroom—which was recommended by the oncologist for the acquisition of knowledge— enables learners to be leaders of their learning to facilitate their clinical reasoning and their critical thinking skills . The expert panel suggested that the contents should be dynamic, clear, and brief, and should be delivered to the learners via videos, TEDx conferences, and research articles. It was agreed that the flipped classroom would be taught by a clinical nurse with knowledge and skills in conducting family interviews in nursing practice. Additionally, it was suggested by the nurse practitioner that the class should use didactic tools, such as Kahoot, role playing, and group dynamics, among others. The round table format—which was recommended to address the nurses’ attitudes—seeks to emphasize the interdisciplinary work and how it affects the care of survivors and their families. The round table would be composed of an oncologist, an oncology nurse, a long-term cancer survivor, and a family member of the survivors who would narrate their experiences. Furthermore, as stated by the nurse researcher “ It is good to allocate a long time to the round table to facilitate questions from the students to the speakers ”. This statement, which was agreed upon by all the members of the expert panel, was incorporated into the educational intervention. It was also suggested to send to the cell phones of students some questions to promote their personal reflection on the topics addressed in the round table. The clinical simulation—which was recommended for the development of skills—would consist of a family interview and a therapeutic conversation with a long-term cancer survivor and his/her family member. 
This clinical simulation would bring students and new graduates closer to contexts similar to those that they will encounter in practice and allow them to apply their knowledge, acquire desired attitudes, and develop their skills. In addition, as stated by the student: “ Simulation is something we students really enjoy because it gives us the opportunity to practice before going to clinical practice” . Even though all the panelists agreed on the importance of clinical simulation, it was the most controversial methodology among the experts. Debate was opened on whether the clinical scenario should be carried out with real or standardized patients/relatives and whether it should take place in person or via a videoconference (due to the COVID-19 pandemic). It was ultimately considered appropriate to use standardized patients and promote the involvement of the student in a clinical simulation to build interpersonal and face-to-face relationships between students and survivors/family members. The combination of different educational methods and the content of each one was well accepted by the expert panel, reaching a consensus of more than 80% of the participants. According to the psycho-oncologist who was supported by all experts, “ the educational intervention is very well designed, very complete and contributes to the acquisition of skills to care for cancer survivors and their families ”. The student also stated: “ I am interested in having different activities and methods because I believe that each one brings something different to the table ”. Finally, the survivor and the family member highlighted that “ the direct involvement of survivors and family members in nursing education is a positive factor”. The educational intervention validated by the expert panel is outlined in . 3.3. Need of Education in Long-Term Cancer Survivorship During the first meeting, the expert panel unanimously expressed the need to train students in the care of cancer survivors and their families. The survivor considered this “very positive for the quality of life of cancer survivors, that nursing is present in the stage of cancer survivorship as much as it is during treatment”. The survivor reaffirmed that she considered it important not only for her and her family but also for “the economic repercussion that this has for society due to the frequency of sick leave or even partial disability that this situation may result in, which perhaps, with nursing care, could be reduced”. Similarly, the oncologist, psycho-oncologist, pharmacist, and nurses expressed “ the need to train nurses to be able to carry out a long-term follow-up similar to that performed during the active phase of cancer treatment ”. The family member agreed on the importance of education in the area of cancer survivorship, although she argued that “ No one has cared about me as a family member ”. Finally, the four-year nursing student who was close to graduating considered that “ I still need to know more about cancer survivorship and family-focused care. It is discussed in class (in theory), but I have not had the opportunity to learn about it in practice (in a clinical simulation) ”. Thus, all the experts agreed on the importance of nursing education in long-term cancer survivorship with a family-focused approach. 
This study presents the characteristics and benefits provided by the collaboration of an interdisciplinary panel of experts to co-design and validate an educational intervention on long-term cancer survivorship for nursing. The expert judgement method has been widely used to validate research tools and educational interventions that need to be carried out rigorously and have not been done before . It is considered to be of importance to establish the validity of the content in educational interventions in the area of health, since they will influence the quality of the learning outcomes . Nursing education on long-term cancer survivorship is emerging, so it was considered appropriate to validate a new interdisciplinary educational intervention before implementing it. Involving patients and family members in education brings benefits such as bridging the gap between theory and clinical practice . Regarding this study, a consensus was used to develop the panel, which was reached without problems (always between 50% and 80%). This consensus may be due to the variety of panelists, which guaranteed complementary points of view . In addition, all experts highlighted the importance and need for nursing education in cancer survivorship. This need was also identified by the Institute of Medicine (2006) in the report “ From cancer patient to cancer survivor: Lost in transition” , which noted that cancer survivorship care should be included in the content of continuing health education, including education for nurses, physicians, rehabilitation specialists, and psychosocial and mental health professionals. Along these lines, Klemp et al. stated that the majority of undergraduate nursing students receive little or no education to meet the needs of cancer survivors. Dietmann also noted that although the number of people who survive cancer continues to increase, and the short- and long-term effects of cancer and its treatment result in physical, psychosocial, and spiritual needs, this content has not been addressed in nursing curricula. Therefore, we consider it essential to conduct future studies that demonstrate the effectiveness of nursing education to improve care for long-term cancer survivors. The panel of experts provided creative multidisciplinary perspectives and allowed for the credibility, future acceptability, and application of the educational intervention, as stated by Dinessen et al. . The panelists considered it necessary for the intervention to be taught interdisciplinarily and with different educational methods and dynamic content. To provide quality care in oncology, professionals must work collaboratively and make joint decisions . Additionally, interdisciplinary education helps to increase the quality of care by improving nurses’ attitudes and perceptions regarding other healthcare professions . Using complementary active educational methods was also found to be beneficial to acquire the different dimensions of competency (knowledge, skills, and attitude) to provide comprehensive care for long-term cancer survivors and their families. This was also highlighted by Dietmann who indicated the importance of implementing active teaching methods for cancer education in undergraduate and graduate nursing curricula. 
This study has strengths and limitations. The heterogeneous composition of the expert panel stands out as a strength: it encompassed experts from various disciplines who work together to provide comprehensive cancer care. Furthermore, including a cancer survivor and a family member (recipients of care) and a senior nursing student was positive for developing a more realistic educational intervention to implement. However, it should be noted that the educational intervention may be too brief to ensure the achievement of full competence to care for long-term cancer survivors. More education is required to develop sufficient skills and knowledge to address the specific needs experienced by long-term cancer survivors and their families. This study describes the process of an expert panel to co-design an educational intervention in long-term cancer survivorship for nurses. The educational intervention will be interdisciplinary and will use three different active educational methodologies: a flipped classroom, roundtable, and clinical simulation. It will bring students closer to the needs of long-term cancer survivors and their families, help them become aware of their own learning needs, and train them in the assessment and care of these individuals through family interviewing and therapeutic conversation. This work emanates from clinical practice, the unmet needs of cancer survivors and their families, and the need for teamwork in oncology. Finally, it will help to advance the education of future nursing professionals and therefore improve their clinical practice.
Long‐Term Clinical Outcomes of Paediatric Kidney Transplantation in Hong Kong—A Territory‐Wide Study
Introduction Kidney failure is an uncommon childhood condition with significant associated morbidity and mortality compared to the healthy paediatric population . While there are considerable geographical variations in the incidence of kidney failure, we previously reported a local kidney failure incidence of 6.3 per million age-related population with an increasing trend over the last two decades . Kidney transplant is the treatment of choice for patients with kidney failure in both paediatric and adult populations. Indeed, there was a 12-fold higher risk of mortality among patients who did not receive a kidney transplant . While there is a growing body of literature on the short- and medium-term outcomes of kidney transplant in children in China , data on longer-term post-transplant outcomes such as patient and graft survival, infection, rejection, and growth remain limited. The aim of this study was to provide a comprehensive evaluation of the outcomes of kidney transplants in Hong Kong, exploring not only graft survival but also complications and growth among our transplant recipients. Methods 2.1 Study Design We conducted a single-centre retrospective cohort study on patients who received kidney transplants from January 2009 to December 2020 and were managed at the Paediatric Nephrology Centre of Hong Kong Children's Hospital. Our centre was formerly located at Princess Margaret Hospital, Hong Kong. The Paediatric Nephrology Centre is the designated referral centre for complicated kidney disease, chronic dialysis and kidney transplant for children in Hong Kong. All patients under 21 years at the time of transplant with a minimum of 2 years' follow-up data were included for analyses. Data including patients' demographics, clinical presentations, laboratory findings, treatment and outcomes were retrospectively obtained from electronic medical records till December 2022. Regarding the follow-up schedule, patients were seen at 1–2-week intervals in the first 2 months, then every 4–8 weeks afterwards. In each follow-up visit, complete blood picture, liver and kidney biochemistry, 12-h trough tacrolimus (TAC) levels, growth parameters, drug compliance, and clinically significant events such as infections were assessed. Height percentiles and Z scores were generated for each patient at the time of transplant and at the last review date. Underweight and overweight/obesity in children below 18 years of age were defined as an age- and gender-specific BMI percentile < 5th and ≥ 85th, respectively. For recipients reaching adulthood, underweight was defined as BMI < 18.5 and overweight was defined as BMI ≥ 23. Glycaemic and lipid profiles were evaluated yearly. Post-transplant diabetes mellitus (PTDM) was diagnosed with a 2-h glucose level greater than 11.1 mmol/L by oral glucose tolerance test (OGTT). The study was approved by the Institution Research Ethics Review Board of the Hong Kong Children's Hospital, Hospital Authority (HKCH-REC-2020-011). 2.2 Treatment—Immunosuppressive Regimens All kidney transplant recipients (KTR) received protocolised immunosuppressive treatments. Standard immunosuppressants comprised corticosteroids, tacrolimus (TAC) and mycophenolate mofetil (MMF). Anti-IL2R induction was prescribed at the physician's discretion based on immunological risks such as the number and type of HLA mismatches between donor and recipient. 
All KTRs received a pulse of methylprednisolone (600 mg/m 2 ) followed by oral prednisolone(2 mg/kg/day) or equivalent, which was gradually tapered to 5 mg/m 2 /day over a course of 6 months. We targeted the TAC 12‐h trough level at 8–12, 7–10, and 5–8 ng/mL, respectively at week 1–4, 5–16, and beyond 16 weeks. MMF was commenced at 600 mg/m 2 /dose twice daily, then tapered to half by week four. Alternatively, the immunosuppressive agents were substituted with cyclosporin and/or azathioprine in selected patients. Anti‐thymocyte globulin (ATG) induction was given to only one patient who received a second transplant. 2.3 Prophylaxis for Infective Complications All patients were given co‐trimoxazole prophylaxis for pneumocystis jiroveci pneumonia (PJP) in the first year post‐transplant unless contraindicated, while patients with G6PD deficiency were offered monthly pentamidine inhalation. Cytomegalovirus (CMV) prophylaxis was given according to the risk of CMV reactivation as follows: universal valganciclovir was given for 6 months in high‐risk patients (i.e., donor‐positive [D+]/recipient‐negative [R−]). CMV prophylaxis for intermediate‐risk (i.e., [D+ or D− to R+] or low‐risk [D−/R−]) was administered as per physicians' discretion. Regular surveillance of serum CMV PCR, Epstein–Barr virus (EBV) PCR, and BK virus PCR was regularly performed during the first‐year post‐transplant and then yearly thereafter. 2.4 Outcome Measures The primary outcomes were the patient and graft survivals at 1, 5, and 7 years after kidney transplant. Graft failure was defined as the need of dialysis. Early graft loss (EGL) was defined as the loss of graft function within the first 30 days of transplant. Graft survival was censored at death or the last follow‐up. Secondary outcomes included complications after transplant, namely, eGFR, rejection, infection, body height, and diabetes upon the last follow‐up. Transplant waiting time was defined as the duration of pretransplant dialysis. Acute tubular necrosis (ATN) referred to the requirement of dialysis in the first week after transplant . In our centre, we performed graft biopsy by indications, and mostly for graft dysfunction. Cellular rejections or antibody‐mediated rejections (ABMR) were histologically biopsy‐proven and classified according to BANFF Classification of Allograft Pathology. Early rejections referred to those that occurred within the first year after transplant . T‐cell mediated rejection was managed with three pulses of methylprednisolone (10 mg/kg) while patients with ABMR were treated with combinations of IVIG (total 2 g/kg), rituximab (375 mg/m 2 ) and plasmapheresis (4–6 sessions of 1–1.5 plasma volume) at the physicians' discretion. Self‐reported drug adherence (i.e., having taken more than 80% of the prescribed medication) was also documented during clinic visits . Estimated glomerular filtration rate (eGFR) was calculated using the modified Schwartz formula for children below 18 years of age. For patients aged 18 or above, their eGFR was generated with the average of Schwartz and the CKD‐EPI formula . The slope of eGFR decline was calculated based on the assumption that it was a linear regression, using the eGFR at the last review date and at 1‐year post‐transplant when presumably a steady state had been achieved . CMV syndrome was defined as CMV viraemia with symptoms such as fever, leucopenia, thrombocytopenia, or raised liver enzymes. BK nephropathy (BKVN) was documented with biopsy‐proven histology results. 
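To make the eGFR definitions above concrete, the following minimal sketch implements the bedside Schwartz estimate and the two-point slope of eGFR decline. It is an illustration rather than the authors' code: it assumes the "modified Schwartz formula" refers to the 2009 bedside Schwartz equation, that serum creatinine is expressed in mg/dL, and it omits the averaging with the CKD-EPI equation used for recipients aged 18 or above.

```python
def egfr_bedside_schwartz(height_cm: float, creatinine_mg_dl: float) -> float:
    """Bedside Schwartz estimate of GFR in mL/min/1.73 m^2 for children < 18 years.

    Assumes the 'modified Schwartz formula' in the text is the 2009 bedside
    Schwartz equation and that serum creatinine is reported in mg/dL.
    """
    return 0.413 * height_cm / creatinine_mg_dl


def egfr_decline_slope(egfr_1yr: float, egfr_last: float, years_since_transplant: float) -> float:
    """Two-point linear slope of eGFR change (mL/min/1.73 m^2 per year).

    Mirrors the approach described above: a straight line between the eGFR at
    1 year post-transplant (assumed steady state) and the eGFR at the last review.
    """
    if years_since_transplant <= 1.0:
        raise ValueError("Slope is only defined beyond the first post-transplant year")
    return (egfr_last - egfr_1yr) / (years_since_transplant - 1.0)


# Hypothetical example: a 140 cm child with creatinine 0.9 mg/dL at 1 year,
# re-assessed 6 years after transplant with an eGFR of 52 mL/min/1.73 m^2.
if __name__ == "__main__":
    baseline = egfr_bedside_schwartz(height_cm=140, creatinine_mg_dl=0.9)  # ~64 mL/min/1.73 m^2
    print(round(baseline, 1))
    print(round(egfr_decline_slope(egfr_1yr=baseline, egfr_last=52, years_since_transplant=6.0), 2))
```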
2.5 Statistical Analysis and Ethical Consideration In this study, statistical analysis was performed by IBM SPSS statistics version 29 software, and a two-tailed p-value of less than 0.05 was considered statistically significant. The characteristics of the patients were examined by descriptive statistics. Categorical variables were compared using chi-square or Fisher's exact tests where appropriate. Continuous variables were analysed by Student's T-tests or Mann–Whitney U tests where appropriate. Graft survival rates were estimated by the Kaplan–Meier survival analysis, and log-rank tests were applied to compare any significant difference in survival rates between different groups. A mixed-design ANOVA was adopted to explore how the trend of certain continuous variables would differ as a function of particular nominal categories. 
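The survival analyses described above were performed in SPSS; the sketch below shows an equivalent Kaplan-Meier estimate and log-rank comparison in Python using the lifelines package, with hypothetical follow-up data and column names chosen only for illustration.

```python
import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

# Hypothetical per-recipient data: follow-up time in years, graft-loss indicator
# (1 = graft failure, 0 = censored at death or last follow-up), and donor age group.
df = pd.DataFrame({
    "years_to_event": [1.2, 5.8, 7.0, 0.1, 8.2, 6.4, 3.3, 9.6],
    "graft_loss":     [0,   1,   0,   1,   1,   0,   0,   0],
    "donor_age_ge35": [0,   1,   0,   1,   1,   0,   1,   0],
})

kmf = KaplanMeierFitter()
for label, group in df.groupby("donor_age_ge35"):
    kmf.fit(group["years_to_event"], event_observed=group["graft_loss"],
            label=f"donor_age_ge35={label}")
    print(kmf.survival_function_)  # death-censored graft survival curve per group

young, old = (df[df["donor_age_ge35"] == v] for v in (0, 1))
result = logrank_test(young["years_to_event"], old["years_to_event"],
                      event_observed_A=young["graft_loss"],
                      event_observed_B=old["graft_loss"])
print(result.p_value)  # two-sided log-rank p-value for the group comparison
```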
Results 3.1 Patient Characteristics A total of 61 KTRs (57.4% male; median age at kidney transplant 13.0 years, IQR: 8.9–17.8) were included in the study. The median follow-up time after transplant was 6.4 years (IQR 4.3–9.6) and the median age at evaluation was 21.9 years (IQR 16.5–26.1). Three patients were lost to follow up due to emigration. Fifty-five (90.2%) received deceased donor transplant (DDT) whereas six (9.8%) had living-related transplant (LRT). The leading cause of kidney failure in this cohort was congenital anomalies of kidney and urinary tract (CAKUT) (34.4%), followed by glomerular causes (21.3%). Hereditary/familial nephropathies accounted for 13.1% of the KTRs (Table ). Peritoneal dialysis (PD) was the major modality of kidney replacement therapy (KRT) prior to kidney transplant (65.6%). The median waiting time for DDT and LRT was 2.4 years (IQR 1.0–3.9) and 0.5 years (IQR 0–4.2), respectively. More than two-thirds of our DDT recipients had > 3 HLA mismatches, while 20% had six-antigen mismatches. The median donor age for DDT and LRT was similar at 40 (IQR 21–48.5) and 46.5 (IQR 43.75–47) respectively. 3.2 Clinical Outcomes 3.2.1 Patient Survival Four patients died during the study period of 478.4 patient-years, corresponding to an estimated crude mortality rate of 8.4 per 1000 patient-years. The overall patient survival was 100%, 96.4%, and 96.4% at 1-, 5-, and 7-years, respectively. The causes of death were infection ( n = 3) and acute T-lymphoblastic leukaemia ( n = 1) (Table ). One lost his kidney graft due to medication non-adherence and succumbed 4.3 years later due to severe peritonitis. The remaining three patients died with a functional graft. 3.2.2 Graft Survival There were eight graft losses during the study period (Table ). Two patients (25%) had EGL within the first month: one due to vascular thrombosis and the other due to recurrent FSGS. Rejection and chronic allograft nephropathy accounted for the rest of the graft losses. The median time-to-graft loss was 5.8 years (IQR 0.75–8.2). The overall death-censored graft survival was 95.1%, 95.1%, and 89.9% at 1-, 5-, and 7-years, respectively. Graft survival rate for DDT at 1, 5, and 7 years was 94.5%, 94.5%, and 89%, whereas the graft survival rate for LRT was 100% at 1, 5, and 7 years (Table ). Upon Kaplan–Meier analyses, donor age ≥ 35 years and the development of circulating DSA with ABMR were associated with poor graft survival (log-rank test p < 0.05 for all; Figure ). Poor drug compliance was also marginally significantly associated with worse graft survival ( p = 0.056; Figure ). The recipients' gender, age, source of kidney (LRT vs. DDT), CMV viraemia or syndrome, BK viraemia and BK nephropathy were not associated with graft survival in this cohort (all p s > 0.10). Among the 53 patients with a functioning graft at last follow-up, the overall mean rate of eGFR decline during the study period was 0.5 mL/min/1.73 m 2 per year. As for the 40 patients (65.6%) who had a functioning graft at 5 years, 15% had an eGFR < 30 mL/min/1.73 m 2 . 
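As a worked check of the arithmetic behind the reported crude mortality rate: 4 deaths over 478.4 patient-years gives 4 / 478.4 ≈ 0.0084 deaths per patient-year, or approximately 8.4 deaths per 1000 patient-years, matching the figure quoted in Section 3.2.1.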
3.2.3 Short- and Long-Term Kidney Allograft Outcomes Seven patients (11.5%) had delayed graft function (DGF) after kidney transplant. A total of 33 rejection episodes occurred in 19 KTRs (31.1%) (Table ). 52.6% (10/19) of the first rejections occurred within the first year of transplant, all of which (10/10) were T-cell mediated rejection diagnosed at a median of 117.5 days (IQR, 73.5–176.3) after kidney transplant. 21.2% (7/33) of rejection episodes were ABMR with a median onset of 750 days (IQR, 458–1252.5). Among those with a functioning graft at the last review, KTRs with any episodes of rejection had a lower median eGFR (40.8 mL/min/1.73 m 2 , IQR 22.8–49.6) compared to those who did not experience any rejection episodes (68.0 mL/min/1.73 m 2 , IQR 51.5–77.5). The proportion of KTRs with medication non-adherence was significantly higher in those with rejection compared with patients who did not develop a rejection (47.3% vs. 7.1%, RR 3.68, p < 0.001). Histology features of calcineurin-inhibitor toxicity were also detected in 12 KTRs (19.7%). 3.2.4 Infective Complications There were 21 DNA viral infective episodes (Table ). 80.9% (17/21) of the viral infections occurred within the first year of transplant. CMV syndrome was diagnosed in 12 KTRs (19.6%), of which 8 (66.7%) were in the high-risk group (D+/R−). Their overall median time of onset of CMV viraemia was 126 days (IQR 36.8–156) after kidney transplant. KTRs from the intermediate risk group (D+/R+) experienced CMV viraemia earlier than the high-risk group (median 134 vs. 34.5 days), none of whom were on anti-viral prophylaxis at diagnosis. BKV viraemia was observed in 23 (37.7%) of our patients, and 8 patients (13.1%) had biopsy-proven BKV nephropathy. These patients were managed with a reduction in immunosuppressants, IVIG, and some with adjunctive treatment. There were no graft losses in patients with BKVN, and 62.5% (5/8) had documented clearance of the BKV viraemia. Transient EBV viraemia occurred in 14 patients (23.0%), and one patient had EBV disease presenting with a tongue ulcer, which improved with adjustment of immunosuppressants. Urinary tract infection (UTI) occurred in 23 patients (37.7%). Among them, 3 had documented graft vesico-ureteric reflux (VUR). Six patients (9.8%) contracted PJP at a median duration of 1418.5 days (IQR 782.3–2776.3) from the time of transplant. There was one KTR (G6PD deficient) who suffered from PJP at about 3 months post-transplant despite pentamidine prophylaxis. 3.2.5 Growth and Metabolic Complications Almost half of our cohort (47.5%) had short stature at the last follow-up. Kidney replacement therapy initiated at an earlier age (7.6 vs. 14.3 years; p = 0.01), kidney transplant performed at an earlier age (11.4 vs. 16.7 years; p = 0.02) and short stature at the time of the transplant ( p < 0.001) were factors significantly associated with short stature at the last review (Table ). Nine patients received growth hormone (GH) therapy after transplant, 7 in the < 3rd percentile group and 2 in the > 3rd percentile group. Sixteen patients (28.6%) demonstrated an increase in height z score post-transplant, though 8 of these 16 remained below the 3rd percentile for height at the last review date. While 10 KTRs (16.4%) were underweight, 14 patients (23.0%) were overweight or obese at the time of the last review (Table ). Five patients (8.2%) developed PTDM at a median of 3.6 years (IQR 2.1–3.9) post-transplant, and 4 of them required medical treatment to maintain adequate glycaemic control. 
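The weight-status categories used in this section follow the cut-offs stated in the Methods; a small helper capturing those rules is sketched below. The age- and gender-specific BMI percentile for recipients under 18 would come from an external growth reference, which is not reproduced here, and the example values are hypothetical.

```python
from typing import Optional


def weight_status(age_years: float, bmi: float, bmi_percentile: Optional[float] = None) -> str:
    """Classify weight status using the cut-offs stated in the Methods section.

    Recipients under 18: age- and gender-specific BMI percentile < 5th is
    underweight, >= 85th is overweight/obese (percentile supplied externally).
    Recipients aged 18 or above: BMI < 18.5 is underweight, >= 23 is overweight.
    """
    if age_years < 18:
        if bmi_percentile is None:
            raise ValueError("A BMI percentile from a growth reference is required for children")
        if bmi_percentile < 5:
            return "underweight"
        return "overweight/obese" if bmi_percentile >= 85 else "normal"
    if bmi < 18.5:
        return "underweight"
    return "overweight" if bmi >= 23 else "normal"


# Hypothetical examples only
print(weight_status(age_years=12, bmi=16.0, bmi_percentile=3))  # underweight
print(weight_status(age_years=20, bmi=24.5))                    # overweight
```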
Discussion The number of paediatric kidney transplants performed in our centre increased three‐fold from the previous era (1992–2002) to the present era (2009–2020). In 2002, Tse et al. reviewed 20 paediatric kidney transplants in Hong Kong, and the graft survival among deceased donor kidney transplants was 92.3% and 83.1% at 1 and 3 years . In this latest cohort, we report improved and favourable long‐term patient and graft survival. However, while advances in immunosuppressive strategies improved kidney allograft outcomes, significant complications including infection and malignancy came in parallel and resulted in morbidity and mortality. The overall patient survival was 96.4% at 5‐ and 7‐years, which was comparable to various international studies that documented survival at 95.5%–99% . The mortality rate among KTRs was lower than that of the paediatric dialysis population in our centre (8.4 vs. 17.3 per 1000 patient‐years) . Mortalities were observed in 4 patients, including 3 patients with infection and 1 patient with acute T‐cell lymphoblastic leukaemia (T‐ALL). Post‐transplant malignancies (PTM) are increasingly recognised in the paediatric transplant population, with an incidence that ranged from 5.6% to 15.4% worldwide, increasing with the number of years post‐transplant . The Australian and New Zealand Transplant Registry reported the median time of death due to malignancy in their paediatric KTRs to be 19 years . The relatively short duration of our study period might not reflect the true incidence of PTM, yet awareness of PTM should be heightened among paediatric nephrologists, facilitating early detection and treatment. Our overall graft survival of 95.1% at 1‐ and 5‐years was comparable to that reported in North America, the UK, and Singapore , ranging from 92.8% to 98% at 1 year and 77.5% to 94% at 5 years.
Three important factors were identified to be associated with improved graft survival in our cohort: absence of DSA and antibody‐mediated rejection; drug compliance; and donor age < 35 years. The development of DSA and ABMR is prevalent in our cohort, probably potentiated by poor HLA matching with deceased donors. In our cohort, more than 3 HLA mismatches were associated with the development of DSA and ABMR (OR 1.15), but this did not reach statistical significance ( p = 0.88). HLA mismatch is a crucial predictor of rejection and graft loss despite the use of modern‐era immunosuppressants . About 66% of our DDT pairs had more than 3 mismatches, in contrast to the 84% of patients who received a well‐matched kidney (0 mismatch/0 DR + 0/1 B mismatch) in the United Kingdom . This is partly attributed to the long waiting time and low donation rates in Hong Kong, which is a major challenge. Compared to the US (0.78 years) and Australia (1.01 years) , our median time to DDT was 2.4 years from the time that the child was listed. The importance of HLA matching in relation to the development of ABMR cannot be overemphasised, as the risk of allograft loss has been shown to increase with each additional HLA mismatch . Indeed, our incidence of acute rejection within the first year of transplant (16.4%), mostly T‐cell mediated, was similar to that reported in North America (16%) and Oceania (12%) . Yet, an apparently higher rate of rejection of 31% over the whole study period was observed, attributed to a high proportion of late‐onset rejection (16/33), especially ABMR (median onset 750 days), which was a significant predictor of graft failure. The findings also underscore the importance of monitoring DSA, even in stable KTRs many years after transplantation. Another important factor for developing DSA and ABMR is non‐adherence. This is an important and potentially modifiable factor to reduce rejections . Non‐adherence was reported to be up to 45.5% in international paediatric KTRs . In our cohort, non‐adherence was documented in 12 patients (19.7%), of whom 9 experienced at least one episode of rejection. The relatively better adherence in our cohort could be partly attributed to the Chinese culture, where parents and caretakers are heavily involved in the care of sick children. On the other hand, these children are also well supported by the dedicated renal nurses assigned to take care of them from the start of dialysis through transplant. The rapport between the nursing staff and the family permitted regular review of adherence, education, and support from the multidisciplinary team. However, we did notice that our KTRs with non‐compliance had an older age at transplant (median 17.3 vs. 12.0 years), which is in keeping with previous studies showing that adolescents who needed to take care of their own medication were at a higher risk of non‐adherence . Regarding donor factors, kidneys from young deceased donors have demonstrated survival benefits over those from older donors in various studies, owing to multiple factors such as a better ability to mount a tissue repair process in the event of kidney injury or rejection . There may also be better functional adaptation of young donor kidneys to the growth of paediatric KTRs . In fact, donor age is one of the important variables considered in the cadaveric kidney allocation systems of many localities .
Due to the scarcity of deceased donors and long waiting times prior to transplant, 21% of our deceased donors were above the age of 50, which was associated with considerably worse graft survival. Our data support giving priority to paediatric patients in organ allocation, both in terms of graft quality and waiting time. Furthermore, avenues to expand the donor pool, such as paired kidney donation programs and ABO/HLA‐incompatible kidney transplants, could be considered to facilitate transplants in the paediatric population. While transplant offers better survival and quality of life to children with kidney failure, it is also inevitably associated with complications. The CMV seroprevalence rate in the general paediatric population was approximately 50% in Hong Kong and increased with age . With the CMV seroprevalence rate in the present cohort being marginally lower at 39.3%, the incidence of CMV viraemia and CMV syndrome was high at 41.0% and 19.7%, respectively. This was in contrast to that reported in the CERTAIN Registry in Europe , in which the overall rate of CMV syndrome during the first 3 years after transplant was as low as 5.0%, and in other single‐centre studies the rate could range from 0% to 11.4% . In our centre, KTRs were given valganciclovir prophylaxis according to CMV risk status, but the dosage could have been suboptimal, limited by adverse effects such as neutropenia. The fact that none of our KTRs were on anti‐CMV prophylaxis at the onset of CMV viraemia supports the use of antiviral prophylaxis in both the intermediate‐ and high‐risk groups in the early post‐transplant period, when they are on intense immunosuppressant therapies. Close surveillance of CMV replication should also be implemented, especially upon cessation of antiviral prophylaxis, to detect late‐onset CMV complications. In recent years, BK virus has also been increasingly recognised as a significant opportunistic infection in kidney transplants, which can lead to cystitis, ureteric stenosis, BK nephropathy, and subsequent graft loss. BK nephropathy has a reported incidence of 1%–6.6% worldwide . In our study, there was a high incidence of biopsy‐proven BK nephropathy at 13.1%, which echoes the recent finding from Shanghai . The mainstay of treatment was reduction of immunosuppression. This highlights the clinical conundrum of balancing the risks of infection and rejection; novel biomarkers or assays to personalise immunosuppressive loads in KTRs remain an unmet need. Finally, growth retardation after transplantation was common because of poor graft function, steroid use, and lack of catch‐up growth in adolescent KTRs. In this cohort, nearly half of our patients had short stature, similar to the findings in a European study where 44.9% of the 3492 transplanted patients had a growth deficit . The high incidence of growth retardation in our series may be related to multiple factors. Over half of our patients had short stature at the time of the transplant and remained as such at the last review. Unfortunately, recombinant growth hormone therapy had not been a standard treatment post‐transplant until the later part of our study . Another reason was that steroid minimisation might not be possible in the face of poor HLA matching. In addition, with a median age at transplant of 13.0 years, many had already reached puberty, and catch‐up growth was not demonstrated.
It is important to optimise children's growth and nutrition during CKD or dialysis in order to facilitate better outcomes for these children. There are several limitations to this study. Due to the retrospective nature of our study, there could be reporting bias. Second, about 21.3% of our patients received transplants at an age beyond 18 years. Notwithstanding, our centre takes care of all paediatric transplant patients in Hong Kong, and all patients received standard management protocols with comprehensive follow‐up data. Hence, our data were still able to provide real‐world evidence on the outcomes of paediatric KTRs, who are predominantly Chinese. Conclusion Our data demonstrate substantial improvements in paediatric kidney transplant recipient (KTR) outcomes, including patient and graft survival, since the inception of our transplant program in 1992. Key factors associated with favourable outcomes include young donor age (< 35 years), absence of donor‐specific antibodies (DSA) and antibody‐mediated rejection (ABMR), and good patient adherence to medications. While establishing a successful paediatric transplant program faces inherent challenges, we believe implementing policies that prioritise children for transplantation is essential. In addition to preventing and closely monitoring for infection and rejection, optimising growth potential and metabolic profiles can further enhance the long‐term outlook for our paediatric KTRs. The authors declare no conflicts of interest.
Ultrasound and single‐port laparoscopic‐guided microwave ablation of abdominal wall endometriosis lesions: A single‐center observational study
b1b44157-3494-4bb1-9534-1b1ba1a0f81f
11919721
Surgical Procedures, Operative[mh]
INTRODUCTION Endometriosis is characterized by the growth of tissue similar to that found in endometrial glands and stroma outside of the uterine cavity. Abdominal wall endometriosis (AWE), which has an incidence of 0.03%–5%, typically occurs secondary to surgical procedures such as cesarean section, hysterectomy, or salpingostomy, or following amniocentesis. The main manifestations of AWE are cyclic abdominal pain and a palpable mass, which affect the quality of life and mental health of patients, while the malignant transformation rate is approximately 1%. Current treatment methods for AWE include medication, surgical resection, and high‐intensity focused ultrasound. The effectiveness of drug therapy for AWE remains unclear, and the long‐term use of these drugs can cause adverse effects. Accordingly, surgical excision remains the standard treatment for symptomatic AWE, with recommended surgical margins of at least 1 cm to ensure complete disease resection. Nevertheless, mesh introduced to support the abdominal wall structure increases the risk of infection, and reoperation may be required in cases of disease recurrence. High‐intensity focused ultrasound (HIFU) ablation is a novel non‐invasive treatment strategy with proven safety and efficacy. However, protecting the surrounding tissue, including the bowel, while achieving complete ablation remains a challenge. In the present study, we used microwave ablation to treat AWE. This technology uses a microwave radiator to emit electromagnetic energy, which is absorbed by tissue and converted into heat, causing tissue coagulation and necrosis. Microwave ablation is another form of thermal ablation, with shorter ablation, operation, and anesthesia times than HIFU. HIFU ablation destroys the AWE lesion; however, inaccurate positioning of the HIFU ablation wand during treatment may result in skin burns. Accordingly, we aimed to combine ultrasound and single‐port laparoscopic monitoring to not only achieve complete ablation but also protect surrounding tissues from damage. In single‐port laparoscopy, the abdominal wall is separated from important organs such as the bowel, and a sufficient safety distance is maintained. At the same time, the laparoscope is pointed at the abdominal wall, allowing color change, bleeding, and adhesion of the abdominal wall to be observed during ablation. This real‐time, dual‐monitoring approach leads to fewer surgical complications and is especially suitable for deep endometriosis lesions. MATERIAL AND METHODS 2.1 Study design and population Patients with AWE who underwent ultrasound‐guided microwave ablation at Wuhan Central Hospital between October 2017 and February 2022 were included in this retrospective study. Patients were followed up using a combination of in‐person appointments and telephone calls. The abdominal wall lesion depth was classified as follows: type I (superficial implant/above the muscle fascia), type II (intermediate/at the level of the rectus muscle fascia), and type III (deep position/below the fascia). Patients with type I lesions were included in the surface group, whereas those with type II or III lesions were included in the deep group.
The inclusion criteria for undergoing ultrasound‐guided microwave ablation were as follows: (1) women of reproductive age with AWE lesions following uterine surgery, (2) abdominal pain related to the menstrual cycle, and (3) size and location of AWE lesions confirmed by ultrasound. The exclusion criteria were as follows: (1) pregnancy or lactation and (2) skin inflammation or ulceration near the AWE lesion. 2.2 Examination and treatment Ultrasonography was performed using a Philips IU22 color Doppler ultrasound machine with a superficial probe (Philips). Biopsies were obtained using a Bard MC1810 puncture needle (BD). Microwaves were generated using an ECO‐100 cold circulation dual‐source microwave tumor therapeutic apparatus (Nanjing Yigao Microwave System Engineering Co., Ltd.) with water circulation cooling, a frequency of 2450 MHz, output power of 40–80 W, and an AL8/AL3 ablation needle. Laparoscopy was performed using a single port (Kangji Medical). Following admission to the hospital, data on the following baseline patient characteristics were collected: age, body mass index, lesion size, lesion quantity, number of previous cesarean sections, latency (the interval between the last cesarean section and the formation of the abdominal mass), relative locations of lesions and scars, symptoms, any other deep lesions, and visual analog scale pain scores. In the absence of contraindications for surgery, ultrasound‐guided puncture biopsy of AWE lesions was performed, followed by microwave ablation. The power level was set at 50 W to achieve complete ablation of endometriosis lesions. Ablation was considered complete when the strong echo of the ablation zone completely covered the lesion area and its margin. The total ablation time was recorded, as were any intraoperative complications. Postoperative pathological examinations were performed to confirm the diagnosis of endometriosis. 2.3 Follow‐up All patients were followed up until August 2023 at 1, 3, and 6 months and at 1, 2, 3, 4, and 5 years postoperatively by telephone or in person. The visual analog scale was used by patients to report pain levels, with 0, 1–3, 4–6, and 7–10 points representing no, mild, moderate, and severe pain, respectively. If the patient could not come to the hospital for follow‐up for various reasons, we conducted a telephone follow‐up, during which the numerical rating scale (NRS) was used. At follow‐up, lesion volume (d1 × d2 × d3 × π/6), visual analog scale pain scores, and recurrence were assessed. Symptom relief was defined as the substantial relief of cyclic abdominal pain or absence of a palpable mass. Recurrence was defined as abdominal wall pain returning to preoperative levels or the presence of an abdominal wall mass on physical examination after 3 months of symptom relief. 2.4 Statistical analyses Data are presented as the mean ± standard deviation for normally distributed data or as the median ± quartile for non‐normally distributed data. For the measurement data of multiple samples, one‐way analysis of variance was used for the comparison of normally distributed data, and the least significant difference test was used for pairwise comparison; the Kruskal–Wallis test was used to compare non‐normally distributed data. For count data from two independent samples or multiple samples, the chi‐squared test or the chi‐squared test with continuity correction was used for comparison. The Fisher exact test was used for 2 × 2 contingency tables. SPSS software (version 27.0; IBM Corp.)
was used to perform statistical analysis, and a p ‐value < 0.05 was considered statistically significant.
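As a concrete illustration of the lesion‐volume formula and pain‐score banding described in the follow‐up protocol above, the short Python sketch below computes an ellipsoid volume from three orthogonal diameters (d1 × d2 × d3 × π/6) and maps a visual analog scale score to the categories used here. The function names and example measurements are illustrative assumptions, not values taken from the study.

```python
import math

def lesion_volume_cm3(d1_cm: float, d2_cm: float, d3_cm: float) -> float:
    """Ellipsoid approximation used at follow-up: d1 x d2 x d3 x pi / 6."""
    return d1_cm * d2_cm * d3_cm * math.pi / 6

def vas_category(score: int) -> str:
    """Map a 0-10 visual analog scale score to the bands described above."""
    if score == 0:
        return "no pain"
    if 1 <= score <= 3:
        return "mild"
    if 4 <= score <= 6:
        return "moderate"
    return "severe"  # 7-10

# Hypothetical example: a lesion measuring 2.0 x 1.8 x 2.1 cm with a VAS score of 6
print(round(lesion_volume_cm3(2.0, 1.8, 2.1), 1))  # approximately 4.0 cm^3
print(vas_category(6))                              # "moderate"
```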
RESULTS 3.1 Lesion location Lesions were classified as type I (superficial) or type II or type III (deep) according to their depth, which was determined using ultrasonography combined with magnetic resonance imaging. Representative images of superficial and deep lesions are shown in Figure . 3.2 Baseline patient characteristics according to lesion depth Of the 30 patients with AWE and complete follow‐up data included in this study, 18 (60%, 18/30) had type I, 10 (33.3%, 10/30) had type II, and two (6.7%, 2/30) had type III lesions. The superficial and deep groups included 18 and 12 patients, respectively. There were no significant differences between the superficial and deep groups in terms of age, body mass index, lesion size, previous cesarean sections, latency, relative locations of the lesions and cesarean scars, preoperative presence of mass and pain, or duration of pain (all p > 0.05). Almost all patients (96.7%, 29/30) experienced preoperative cyclic abdominal pain, including 24 with menstrual pain, one with ovulatory pain, and four with pain around the time of menstruation. All 30 patients had a history of cesarean section. One patient (3.3%, 1/30) had undergone hysteroscopic cervical polypectomy, two patients (6.7%, 2/30) had undergone laparoscopic myomectomy, and one patient (3.3%, 1/30) had undergone two laparoscopic myomectomies (Table ). Of the 30 cases included in this study, only one patient had experienced recurrence after surgical removal of an ectopic lesion that developed following cesarean section; this patient had undergone two abdominal wall operations, cesarean section and abdominal wall adenomyoma extraction. None of the other patients (29/30) had received any other treatment previously. 3.3 Imaging changes and intraoperative complications during ablation Figure shows changes in ultrasonograms captured during ablation, with the low echo of the lesion slowly becoming obscured by the high echo of the thermal radiation area. The single‐port laparoscopy used to monitor the abdominal wall (Figure ) revealed few intraoperative complications during ablation. One patient experienced burning of the abdominal wall (Figure ), causing the ablation needle to be immediately withdrawn. Subsequently, edema and abdominal wall thickening were observed in this patient (Figure ). No other intraoperative complications, including skin pain, skin burning, bleeding, infection, or poor wound healing, were observed.
3.4 Follow‐up Compared with the preoperative lesion volume (4.0 ± 3.6 cm 3 ), the lesion volume at 1 year postoperatively was significantly lower (1.6 ± 1.3 cm 3 ; p < 0.05). No further significant differences were observed between the other time points (Table ). Compared with the preoperative pain score (6.4 ± 1.1 points), the postoperative pain scores were significantly lower at all time points ( p < 0.01) (Table ). Three patients (10%, 3/30) experienced disease recurrence in the first postoperative year, one patient experienced recurrence in the second postoperative year (cumulative 13.3%, 4/30), and one patient experienced recurrence in the third postoperative year (cumulative 16.7%, 5/30); no further cases of recurrence were observed in the fourth and fifth postoperative years. Accordingly, the total recurrence rate was 16.7% (5/30). Two of the patients who experienced recurrence had a history of pelvic surgery: one had undergone laparoscopic myomectomy, and the other had undergone two laparoscopic myomectomies.
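To make the follow‐up comparison concrete, the sketch below applies non‐parametric tests of the kind named in the statistical methods (a Kruskal–Wallis test across time points, plus a pairwise Mann–Whitney U comparison added here purely for illustration) to hypothetical lesion‐volume measurements; the numbers are invented and do not reproduce the study data.

```python
# Minimal sketch, assuming SciPy is available; all measurements are hypothetical.
from scipy.stats import kruskal, mannwhitneyu

# Lesion volumes (cm^3) for a handful of illustrative patients at three time points
pre_op     = [4.1, 6.8, 2.9, 5.0, 3.3]
six_months = [2.6, 4.0, 2.1, 3.1, 2.4]
one_year   = [1.5, 2.2, 1.1, 1.8, 1.3]

# Kruskal-Wallis test across all time points (non-normally distributed data)
h_stat, p_overall = kruskal(pre_op, six_months, one_year)
print(f"Kruskal-Wallis: H = {h_stat:.2f}, p = {p_overall:.3f}")

# Pairwise comparison of preoperative vs 1-year volumes
u_stat, p_pair = mannwhitneyu(pre_op, one_year, alternative="two-sided")
print(f"Mann-Whitney U (pre vs 1 year): U = {u_stat:.1f}, p = {p_pair:.3f}")
```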
DISCUSSION Symptomatic AWE can result in long‐term adverse effects on quality of life and work productivity and can result in massive increases in healthcare costs. Therefore, it is crucial to diagnose and treat it at an early stage. At present, the therapeutic methods for AWE include medical management, surgical excision, and minimally invasive techniques. The success rate of medical therapy has been reported to be low, and a temporary alleviation of symptoms following the use of oral contraceptives, progestogens, danazol, progesterone, and aromatase inhibitors is often followed by recurrence after cessation of the drug. Traditionally, surgical excision has been the primary treatment for AWE. To prevent recurrence, appropriately expanding the scope of surgical resection, such as by a margin of 1 cm, is considered adequate. However, expanding the scope of surgical resection for AWE patients with involvement of the abdominal wall fascia and muscle may lead to complex repairs, including the need for flaps and mesh, as reported by Cocco et al. The reductions in lesion volume and pain after microwave ablation were consistent with the conclusions of Liu et al.'s study, a retrospective study of nine women with pathologically proven endometriosis of the abdominal wall. In that study, after microwave ablation treatment, the volume of all nodules decreased significantly and no major complications were observed; all patients had subjective improvement in their cyclical abdominal wall pain. The most common site of extraperitoneal endometrial tissue growth is the abdominal wall outside the site of previous pelvic surgeries. Owing to the estrogen and progesterone dependence of AWE, edema or even hemorrhage is thought to occur during cyclic changes in estrogen and progesterone levels, as in the normal endometrium; outside the uterine cavity, this induces fibrosis or adhesion of surrounding tissues and eventually leads to the formation of local nodular lesions. Therefore, patients with AWE experience estrogen and progesterone cycle‐related pain, which seriously affects their health and quality of life. Malignant transformation has been reported in approximately 1% of endometriosis cases. Microwave ablation technology has previously been applied to liver tumors, thyroid tumors, endometriosis, and AWE. The present study's results demonstrated that lesion volume had decreased significantly by 1 postoperative year. Prior to this time point, tissue edema may have contributed to the measured lesion volume.
Wang et al. reported that HIFU ablation of AWE resulted in a significantly lower lesion volume at 6 months postoperatively compared with preoperative values, as determined by contrast‐enhanced ultrasonography. The difference in volume assessment methods may explain the earlier decrease in lesion volume observed in that study compared with the present study. However, in the present study, visual analog scale scores showed a significant decrease in pain levels at all postoperative time points compared with preoperative levels, showing that the treatment had achieved its aim; these results are consistent with those of studies by Wang et al. and Yang. Recurrence manifested in 16.7% of patients. Compared with other treatments, the overall success of medical management is temporary, with symptom recurrence following drug discontinuation reported by Allen et al. The recurrence risk ranges from 4.3% to 11%, but the risk can be minimized by ensuring adequate margins during surgery. We hypothesize that prior laparoscopic myomectomy may increase the recurrence rate. When uterine fibroids located in or near the endometrium are removed laparoscopically, endometrial tissue may be implanted intramuscularly or into the subcutaneous soft tissue layer, which may give rise to endometriosis of the abdominal wall. As the number of cases in our center continues to increase, we will further investigate factors associated with recurrence. We also assessed lesions according to depth, with different depths requiring different treatment methods; superficial lesions can be treated with ultrasound‐guided percutaneous ablation, but deep lesions require single‐port laparoscopy to allow observation. Although this distinction has been made previously, it was not in the context of microwave ablation. Single‐port laparoscopy is performed through the patient's umbilicus after the establishment of an artificial pneumoperitoneum, expanding the subcutaneous space and making the operating space clearer, while also facilitating the monitoring and heating of deep lesions during ablation. Therefore, single‐port laparoscopy and ultrasound‐guided ablation of AWE lesions is beneficial if the lesion is deep (located in or below the rectus abdominis) and invades the rectus abdominis and its posterior sheath. Accordingly, the surgical procedure requires laparoscopy to optimally observe color changes. Alternatively, if the patient is obese and the lesion is close to the abdominal wall and surface, single‐port laparoscopy can help avoid thermal damage to the viscera under artificial pneumoperitoneum, and the abdominal and pelvic wall can be observed under laparoscopy for the presence of visceral and peritoneal adhesion, avoiding damage to important blood vessels, nerves, and visceral organs. Ablation can be stopped immediately in cases of abdominal wall thermal damage and discoloration to protect the abdominal wall and surrounding tissues from further damage. Several limitations should be considered in our study. It was a single‐center retrospective observational study with a small number of cases. Additionally, the patients were divided into the surface or deep group according to the location of the lesions, and the number of patients in each group was small; thus, a comparative study between the two groups was not possible. The promising findings reported herein warrant confirmation in further large‐cohort prospective studies.
In subsequent studies, we plan to conduct further research on precision ablation to reduce unnecessary damage to surrounding tissue, while also combining a variety of treatments to reduce recurrence. CONCLUSION Ultrasound and single‐port laparoscopy‐guided ablation of AWE was associated with significant decreases in lesion volume and pain, with few adverse effects recorded. The recurrence rate in this study was 16.7%, indicating its potential as an important clinical treatment method. With the continuous application and development of ultrasound‐guided minimally invasive technology, clinicians are seeking to minimize surgical trauma while retaining satisfactory treatment effects. All authors contributed to the study design, data collection, writing of the manuscript, data cleaning, statistical analysis, research guidance, and manuscript editing. Shunshi Yang: Conceptualization, methodology, project administration, resources, software, supervision, validation, and writing—review and editing. Jueying Li: Conceptualization, data curation, formal analysis, methodology, supervision, validation, and writing—original draft. Jingjing Li: Data curation, formal analysis, funding acquisition, project administration, and writing—original draft. Xiaoyu Zhao: Investigation, methodology, software, and supervision. Mengying Li: Data curation and project administration. Yi Zhang: Data curation, project administration, and supervision. Xiong Li: Conceptualization, resources, and writing—review and editing. Ying Chen: Investigation, resources, and validation. This work was supported by a grant from the Wuhan Municipal Health Commission of Hubei Province, China (grant number: WX21C36). The authors declare no conflicts of interest. This study was approved by the Ethics Committee of the Wuhan Central Hospital on October 8, 2015 (Wuhan Health and Family Planning Commission Application [2016] 17); all patients provided informed consent.
Impact of the transition to digital pathology in a clinical setting on histopathologists in training: experiences and perceived challenges within a UK training region
f63f99d5-5434-413a-b68a-983d7332e23e
10511979
Pathology[mh]
While there is an understanding of the need for guidance for histopathologists transitioning to reporting clinical cases with digital pathology, guidance specifically to address the needs of histopathologists in training is not well established. Assessment of the experiences of histopathologists in training within a region with access to digital pathology provides new understanding of their perceptions of the benefits and challenges of adoption of digital pathology and their specific training needs. The survey results highlight the need to consider histopathologists in training during the transition to digital pathology, and to ensure a means to develop their confidence in its utility within the diagnostic setting. We outline key considerations for training in digital pathology and the potential support needed for those working within training regions with variable access to digital pathology. Adoption of digital pathology (DP) within clinical practice is in the early stages in the UK. There are currently few centres with a fully digital set-up for diagnostic reporting; however, other centres are beginning the transition. The benefits (perceived and actual) and the challenges for diagnostic pathology practice are well documented; however, there are limited data about the acceptance and the impact of DP on histopathology training. As one of the first UK centres to undertake the transition to diagnostic DP, we previously explored the considerations necessary for histopathologists in training (hereafter ‘trainees’) in relation to this, and what measures might be beneficial in support of their specific training needs, proposing a programme of training with a theoretical and practical introduction ahead of DP transformation. As the transition to DP continues to evolve nationally and internationally, there is now comprehensive literature on DP validation for diagnostic reporting and increasing numbers of experiential commentaries on DP implementation, with national and international groups being established to support the pathology community in the transition, and onward to the development and use of artificial intelligence (AI). However, similar support for trainees is lacking. Uniquely, we have been able to reflect on the trainees’ experience of the introduction of diagnostic DP to our region. Through an online survey, we have explored their opinions regarding the transition to diagnostic digital reporting, and the wider potential benefits of DP to training and education, and any perceived challenges that access to DP introduces. A survey was circulated via the online SurveyMonkey survey tool ( www.surveymonkey.com ) to 24 trainees in their first year of training and above, based in four regional hospitals: the index tertiary referral centre which has been fully digitised since summer 2020 and three district general hospitals, one of which is currently partially digitised and the others without a digital diagnostic service. The survey results would be used to inform training in DP within the region, and through better understanding of the reported perceptions and opinions, potentially improve the success of both DP training and utilisation. 
The survey comprised 33 individual questions about demographics, personal experience of DP within a clinical diagnostic setting, wider experience of DP for education and in other settings, current level of access to diagnostic DP, perceived training needs in relation to the transition to DP, impact of DP on histopathology training and examinations, and, for those without current DP experience in a clinical setting, perceived readiness for transition and areas of perceived potential benefit and challenge in the use of DP in the diagnostic setting. There were also questions on attitudes and opinions to both DP and AI in histopathology. The trainees were asked to consider their responses in relation to DP experience in the absence of the impact of COVID-19, as far as possible. Nineteen trainees responded to the survey (79% response rate) and all answered in full. Respondents were at varying stages in their training, with 11 of 19 in the first 3 years of training, and 8 of 19 in their 4th year or above. In relation to examination status (Fellowship of the Royal College of Pathologists, FRCPath), 5 of 19 were post-FRCPath part 2 (all five with experience of diagnostic DP), with the remainder pre-FRCPath part 2 (9 of 19 pre-FRCPath part 1). General level of experience of DP Eleven of 19 trainees reported that they had worked in a centre with access to DP for diagnosis, although only 10 of 19 had personal experience in using DP for diagnosis, of whom 9 of 19 had >6 months of DP experience. One respondent with DP experience for diagnosis in one hospital (>6 months) had since moved to a new post in a hospital without current access to DP. A minority (3 of 19) had exposure to DP during undergraduate medical education in histology or histopathology. All respondents had experience of DP in at least one context, and the majority (17 of 19) had used online digital slide resources for their own education or in the context of a course, and over half (11 of 19) had used DP slides for educating others. Ninety per cent (9 of 10) of those using DP for diagnosis were aware of the Royal College of Pathologists (RCPath) guidance on the implementation of digital pathology; however, only 33% (3 of 9) of those working outside the department using DP were aware of this guidance. Training to use DP in the diagnostic setting Seven of 19 reported having not received any specific training in the use of DP for diagnosis, including 20% of those currently using DP for diagnosis. Almost half had attended a training day on DP (8 of 19), and 3 of 19 had received training from the DP vendor (including 30% of those currently using DP). Validation sets collated in-house for consultant pathologists undergoing DP validation had been reviewed by 2 of 19 respondents, both with experience of diagnostic reporting on DP. No respondents had undertaken a formal ‘validation’ process. Experience with DP in the diagnostic setting All trainees who have been based in the centre with routine diagnostic DP had experience in reporting surgical pathology digitally (10 of 19), and in the use of DP for sharing diagnostic cases flagged by colleagues for educational purposes. The extent to which these trainees currently use DP for diagnostic work is variable; 4 of 10 (3 of whom are post-FRCPath part 2) report using DP only for specific aspects of diagnostic reporting, such as assessment of measurements (tumour, margins), and prefer to report on glass slides (GS).
The remaining 6 of 10 review both digital slides and GS for each case, with 4 of 6 reviewing all GS for a case (as opposed to review of selected GS). None report solely on DP. In terms of confidence in DP reporting, 6 of 10 do not feel confident but believe that they will with additional experience. Of those who were more confident, the time taken to gain confidence was very variable (<1 to >6 months). Specific considerations related to diagnostic DP experience and perceived differences between DP and GS reporting are presented in . An essential part of training is the opportunity to review diagnostic cases with an experienced colleague, typically the consultant pathologist responsible for sign-out of the diagnostic report. DP introduces additional considerations around this, which have been impacted further by the COVID-19 pandemic and the requirement for social distancing and remote working. In our institution, there has been an overlap between the introduction of DP and the safe working requirements of the pandemic, which are difficult to disentangle. However, pandemic aside, trainees reporting diagnostic cases on DP have various options for reviewing cases with a consultant colleague: (1) review together on DP in real time in the same location; (2) review together on DP in real time but remotely via videoconferencing; (3) review of cases separately with email communication about the findings; or (4) trainee review of the case on DP followed by review on GS in real time with the consultant. Respondents did not express a preference as to which of these case sharing options was most beneficial to training. Six of 10 agreed that it made no difference whether the case was co-reviewed on GS or digitally, although case sharing limited to email communication was least favoured. Trainees were asked to comment (free text) on areas they found to be easier with DP and those they felt to be more challenging. Training to report diagnostic cases on a digital platform Those trainees using DP for diagnostic reporting were asked about training in relation to this. While they generally felt supported in the transition to DP, 4 of 10 felt that they had not received sufficient training to report digitally (with 4 of 10 neutral on this point). Only 1 of 10 felt that specific training was not necessary in relation to making the transition. Four of 10 were concerned about maintaining competence in reporting on GS. The following specific areas were agreed as being of interest for future training, in order of popularity: potential challenges/pitfalls in digital diagnosis, data governance and ethical considerations, use of the digital platform (functionality), integration of DP into the laboratory workflow, and information technology considerations. General considerations around the wider impact of access to diagnostic DP on training experience For those within the DP-enabled centre, all agreed it had been a positive experience, facilitating generally improved case sharing and access to cases (see ). None felt that the introduction of DP had negatively impacted their training.
Free-text comments included:
‘Reduction of pressure on trainees who can keep slides but consultant can review digital images and order extra work in parallel.’
‘…overall the introduction of DP is positive in that it makes a huge number of cases accessible and reviewable to trainees, however DP and the option for more remote working has meant that I feel I had less contact with consultants and time going through cases.’
In relation to readiness for the FRCPath examinations, it is noteworthy that no respondents felt that using DP for diagnosis would impact examination preparation negatively, although 1 of 19 felt that the largely digital nature of cases shared for teaching sessions in region might have a negative impact. Overall, DP was seen to facilitate access to a greater range and number of cases than might otherwise be expected, with 12 of 19 and 13 of 19 agreeing this to be the case, respectively (the remainder being neutral or unsure). Given the nature of the FRCPath part 2 examination, a case-based practical assessment set on GS, we focused specifically on any perceived potential impact of DP on examination preparation. Five of 19 had undertaken the FRCPath part 2 since DP was introduced within region, and the opinions of these trainees were split: one of five felt that reporting and reviewing cases regularly on DP (vs GS) has a negative impact on readiness for the examination; two of five (three of five neutral) preferred to report on GS during examination preparation; and views were mixed as to whether it would have been more helpful to review cases on GS with a consultant during this time. While there was no preference as to whether teaching cases were more helpfully seen on GS versus DP in the examination preparation period, three of five and two of five agreed that practice examinations and FRCPath part 2 courses (respectively) were better on GS than on DP (the remainder being neutral). Subjectively, the respondents regarded access to DP (including remotely) as having made examination preparation easier overall. Free-text comments in relation to DP and the examinations included:
‘The issue for FRCPath examinations is there is a tendency to neglect exam practice cases on glass especially for cytology.’
‘Whilst it is better to see cases on glass slides in preparation for the exam, being able to have regular black box sessions and access to essentially a “library” of cases digitally is much more time-efficient. This benefit completely outweighs the cost of seeing “less” glass to a degree.’
‘…I would not be happy to sit a digital Part 2 examination without formal training and significant clinical experience with DP.’
Perceptions of trainees in non-DP-enabled training centres within region
While this had been explored to an extent in our previous work, the current survey provides novel insight into how variable access within the region may itself impact on perceptions, in contrast to the investigation of a DP-naïve cohort. The results are presented in . It is evident from these results that this cohort of trainees feels that access to DP anywhere within region has a positive impact on their training overall (including five of six in non-digital centres), although there are clear concerns about the variable access to diagnostic DP within region and about their own transition to DP reporting. Notably, one respondent had moved from a centre with access to DP and experience with digital reporting to one without access.
This doctor suggested that the transition could be challenging, in view of changes in workflow and access to cases and the loss of benefits of DP such as ease of measurements and low-power assessment, although overall they did not perceive a negative impact on training.
Perception of the impact of availability of DP on training and future job prospects, and on the promise of AI
Overall, there was positivity in relation to the impact on training of the introduction of DP into the region, with 84% (16 of 19) agreeing that this has been a positive experience, and 89% (17 of 19) agreeing that it has provided greater training opportunities. Considering impact on their future consultant careers, 95% (18 of 19) agree that the ability to report on both DP and GS will be beneficial, and 74% (14 of 19) agree that personal experience with diagnostic DP during training will impact on future job choices (more likely to apply for jobs with access to DP). Finally, while the survey did not focus on AI, there was good awareness of the potential role for AI in pathology, with almost all respondents aware of the potential for AI to aid diagnosis, prognostication and derivation of novel insights into disease. Seventy-four per cent (14 of 19) look forward to the potential of using AI, although around one-third (7 of 19) remain concerned about the potential for DP and AI to replace pathologists. Thirty-seven per cent (7 of 19) have already had involvement in research on the development of AI, and overall, 79% (15 of 19) would like to be involved.
The literature on the transition to the use of DP for diagnostics is extensive; however, training and guidance specifically related to histopathologists in training are rarely mentioned, and the opinions and perceptions of those in training regarding the transition to DP are largely unknown. In contrast to the governance structures in place for consultant histopathologists in terms of developing and ensuring competence in diagnostic reporting on whole slide images, including RCPath guidance on a formal validation process for reporting digitally, and guidance from the College of American Pathologists, there is currently no such equivalent for trainees nor recommendation within these documents as to how they might be considered during DP implementation, although awareness exists that such guidance is warranted. The movement of doctors between centres during training with variation in availability of DP requires additional consideration around the maintenance of skill and confidence (and competence) in reporting GS, an observation relevant not just in the UK. Furthermore, the FRCPath part 2 examination is set on GS rather than in a digital format, which has the potential to introduce anxiety among those preparing for the examination who may feel ‘out of practice’ with making diagnoses on GS, especially given that many organised courses now use digitised slides, in part driven by the need for remote delivery of teaching during the COVID-19 pandemic. The results of our survey have been enlightening in that while the trainees overall perceive a large amount of educational benefit in relation to access to DP, particularly during the COVID-19 pandemic, as highlighted by others, there is evidence of variability in the uptake of DP in the diagnostic setting, and a perceived lack of confidence in its routine use. The reason for the perceived reluctance to fully use DP for diagnostic reporting is not entirely clear from the survey.
The respondents showed good awareness of the potential pitfalls in digital diagnosis, in line with those previously reported within the literature, and generally felt that they had sufficient training to use the digital platform. However, levels of confidence in reporting digitally were not uniformly high, and only 3 of 10 trainees with diagnostic DP experience were ‘not concerned’ about the accuracy of reporting on DP versus GS. While those using DP diagnostically were aware of the RCPath guidance on validation, only a minority had reviewed any of the available validation sets, which is a core component of the governance structures recommended at consultant level during the transition to DP. It could be inferred that the lack of a ‘validation-type’ exercise may have impacted on confidence in digital reporting and a willingness to use the system routinely, confounded by a degree of concern (shown by 4 of 10) about maintaining competence in reporting on GS. Indeed, personal communication with trainees subsequent to the survey indicates enthusiasm for a ‘validation’ process to aid confidence in their diagnostic interpretation on DP, and suggests that the motivation to transition to DP is impacted by wider issues around the need to maintain skills in GS reporting both during training and for future job opportunities at consultant level. It was also highlighted that making the transition to DP during the later stages of training, when focus is necessarily on developing confidence in independent reporting, is potentially an additional challenge. Consideration of the stage of training at which DP is accessed for diagnostic reporting is therefore also important, although clearly this will be less of an issue when DP is more widely available.
Significantly, the skill to report on GS is required for the FRCPath part 2 examination; however, within our cohort, it appears that transitioning to DP ahead of the examination was not a significant concern, as the perceived benefits of rapid access to larger numbers of cases digitally outweighed the reduction in reporting on GS, although there were mixed opinions as to whether having formal teaching and practice examinations during this period in a GS format might be more beneficial. For local context, it should be noted that while DP reporting has been adopted widely within the digitally enabled department, the GS are routinely sent out to the pathologists, and therefore any impact from more limited access to GS for diagnostic cases may not be reflected fully in the current survey. Maintenance of competence in GS reporting also remains relevant due to the rotational nature of the training programme, given that some centres are currently without access to DP, a situation not unique to our region; this must be a consideration for training going forward. We would therefore advocate that trainees be supported to ensure that they feel confident in returning to GS reporting and are given the time to make the adjustment. It was noteworthy that those without current diagnostic DP experience conveyed considerable anxiety about the transition, although it is evident that they have significant DP experience within other contexts such as educational courses.
This is important to address, as while positive emphasis on the relevance of any DP experience in developing confidence about digital diagnostic reporting may be reassuring, it is noted that 60% (6 of 10) of our cohort accessing DP regarded the clinical standard platform as superior for diagnostic purposes to platforms used purely in the educational setting. This may be a significant consideration if suboptimal experience with DP in an educational setting translates into reduced confidence in DP. Going forward, we would advocate a multilayered approach to training within a region with access to diagnostic DP, recognising the overlap with the needs of consultant colleagues but also the specific considerations for training which we have outlined. Early general education on the utility of DP, on practical issues related to the integration of DP into the laboratory, and on governance-related matters including the validation process will provide inclusivity across a region with variable access, and can take the form of a group session effectively establishing a ‘community of practice’. Training on technical considerations would be beneficial at the outset to ensure understanding of the functionality of DP and what the limitations may be. Our trainees expressed interest in the ethical and legal considerations in relation to DP, and these needs should be addressed more widely, as recent evidence has revealed a general lack of understanding among histopathologists of these aspects. There should be provision of training on the use of the digital platform, with ongoing support for issues arising. Importantly, we would advocate establishment of a DP validation resource generalised across specialties, enabling trainees to develop confidence in DP reporting. Awareness of existing educational resources developed by early adopters of DP, including those developed by the PathLAKE consortium in the UK (www.pathlake.org), should also be raised. Finally, we recognise that some trainees would benefit from additional support during the transition, and on the basis of the survey results, we have proposed a mentorship scheme whereby trainees with DP experience offer support to others in transition, a scheme that could be rolled out more widely beyond the region as other centres become digitally enabled; this is to be explored.
To date, the needs of histopathologists in training as they transition to DP have not been evaluated specifically, and while many of these needs overlap with those of consultant pathologists, there remain specific considerations which are particularly relevant within training regions with variable DP access. We have shown that confidence in reporting on the digital platform is a major factor for consideration, and while there is no formal recommendation for trainees to undergo validation to report digitally, this may be of benefit in aiding successful transition. At this stage where DP is not uniformly available, support is also necessary to maintain skills with GS reporting.
Coupling metabolomics and exome sequencing reveals graded effects of rare damaging heterozygous variants on gene function and human traits
549c213c-3d55-4ed5-9972-e3d7c3cd60e1
11735408
Biochemistry[mh]
A complex interplay of thousands of enzymes and transport proteins is involved in maintaining physiological levels of intermediates and end products of metabolism. Disturbances of their function can result in severe diseases, such as those caused by inborn errors of metabolism (IEMs), or predispose to common metabolic diseases such as type 2 diabetes or gout. While the study of rare, early-onset, autosomal recessive IEMs has uncovered many metabolite-related genes, such studies are limited by the very low number of persons homozygous for the causative variants. Conversely, genome-wide association studies (GWASs) in large populations have revealed thousands of common genetic variants associated with altered metabolite levels – , but these variants’ functional effects are often unknown, and their modest effect sizes limit their direct clinical impact. Gene-based aggregation testing of rare, putatively damaging variants in population studies can address this challenge. Previously, such studies have focused almost exclusively on the circulating metabolome – . We have shown recently that GWASs of paired plasma and urine metabolomes do not only reveal many more associations but also enable specific insights into renal metabolite handling . We therefore aimed to perform gene-based testing of the aggregate effect of rare variants on the levels of 1,294 plasma and 1,396 urine metabolites quantified from 4,737 participants in the German Chronic Kidney Disease (GCKD) study with whole-exome sequencing (WES) data to identify metabolism-related genes and to understand whether the underlying rare, almost exclusively heterozygous variants permit inferences complementary to the ones obtained from the study of IEMs. Patients with IEMs typically show severe symptoms that originate from accumulation or depletion of metabolites, while heterozygous carriers of the causative variants often show milder changes of the same or related metabolic phenotypes . We hypothesized that sex-specific analysis of metabolite-associated, X chromosomal genes as well as knowledge-based, computational modeling based on sex-specific organ-resolved whole-body models (WBMs ; ) of human metabolism can inform on whether heterozygous damaging variants capture the metabolic effects of their unobserved homozygous counterparts. WBMs enable the investigation of homozygous gene defects through deterministic in silico knockout modeling. The resulting virtual IEMs reflect observed IEMs – . We further hypothesized that metabolite-associated rare variants identified in the GCKD study would show associations with related human traits and diseases in very large population studies and that the genetic effects would be proportional to their effects on metabolite levels if the implicated metabolites are molecular readouts of disease-relevant processes. The large UK Biobank (UKB) with WES data and extensive health record linkage permits the systematic study of the aggregated and individual effects of rare, damaging, metabolite-associated variants on a wide variety of traits and diseases. Here, we set out to perform gene-based rare variant aggregation testing to discover genes associated with metabolite levels and to characterize their genetic architecture with respect to the identified variants and across plasma and urine. 
We validate identified genes and variants and the range of their effects through complementary genetic approaches, with a new computational method based on WBMs, and through proof-of-principle experimental studies, and identify traits and diseases for which these metabolites represent molecular readouts. As summarized in Fig. , rare, putatively damaging variants were identified in 16,525 genes based on WES data from 4,737 GCKD study participants (mean age of 60 years, 40% women; Supplementary Table ). Metabolites were determined by nontargeted mass spectrometry and covered a wide variety of superpathways (Metabolon HD4 platform; Supplementary Table ). Exome-wide burden tests for the association between each gene and the levels of each of 1,294 plasma and 1,396 urine metabolites (781 overlapping) were carried out using two complementary ‘masks’ that differed in the selection of qualifying variants (QVs) for gene-based aggregation. While the ‘LoF_mis’ mask contained a median of eight QVs per gene predicted to be either high-confidence loss-of-function (LoF) variants or deleterious missense or in-frame nonsynonymous variants, the ‘HI_mis’ mask contained a median of 16 QVs per gene predicted as high-impact consequence (transcript ablation or amplification, splice acceptor or donor, stop-gain, frameshift, start or stop lost) or as deleterious missense variants using additional prediction scores. Both masks assume a LoF mechanism but account for different genetic architectures.
Discovery of 192 significant gene–metabolite associations
We identified 192 significant gene–metabolite pairs across both plasma (P value < 5.04 × 10⁻⁹) and urine (P value < 4.46 × 10⁻⁹), where 43 associations were detected in both (192 + 43 associations overall; Fig. and Supplementary Table ). These involved 73 unique genes and 179 metabolites, with a comparable number of genes and metabolites identified in plasma and urine. There were 22 and 17 genes with significant associations exclusively in plasma and in urine, respectively. While the majority of associations were detected with both masks, the more inclusive ‘HI_mis’ mask yielded more mask-specific associations than the ‘LoF_mis’ mask (Fig. ). Amino acids and lipids were the dominating pathways among the associated metabolites (Supplementary Fig. ). The higher proportion of implicated lipids in plasma than in urine is consistent with the absence of glomerular filtration of many lipids (Fig. ). Associations detected in both plasma and urine generally affected the levels of the implicated metabolite in the same direction (Fig. ). Sensitivity analyses evaluating additional masks and methods for aggregation testing (LoF only, sequence kernel association test (SKAT) and SKAT-optimal unified test (SKAT-O)) as well as sex-stratified and kidney function-stratified analyses supported the robustness of the main findings (Extended Data Figs. – and Supplementary Tables and ). Previous independent studies of associations between sequencing-based rare variants and metabolite levels obtained using comparable technology have focused on plasma and serum. Comparison of the 128 discovered gene–plasma metabolite associations in this study with previous studies showed that 69% (88 of 128) were not reported previously, although 93% (82 of 88) of the new findings involved metabolites analyzed before (Supplementary Table ; detailed description in the and the ).
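To make the aggregation step concrete, the sketch below shows the core of a simple burden test as outlined above: the qualifying variants of one gene and mask are collapsed into a per-person carrier indicator, which is regressed on rank-based inverse normal transformed metabolite levels with covariate adjustment. This is a minimal illustration rather than the pipeline actually used in the study, and the function and variable names are hypothetical.

```python
import numpy as np
import statsmodels.api as sm
from scipy.stats import norm, rankdata


def inverse_normal_transform(values):
    """Rank-based inverse normal transformation of metabolite levels."""
    ranks = rankdata(values)
    return norm.ppf((ranks - 0.5) / len(values))


def burden_test(qv_genotypes, metabolite, covariates):
    """Minimal gene-based burden test (illustrative only).

    qv_genotypes : (n_samples, n_qualifying_variants) minor-allele counts for
                   the qualifying variants of one gene under one mask
    metabolite   : (n_samples,) raw metabolite levels
    covariates   : (n_samples, n_covariates), e.g. age, sex, eGFR, genetic PCs

    Returns the burden effect size (in s.d. units of the transformed
    metabolite) and its P value.
    """
    # Collapse qualifying variants: carrier of at least one rare allele.
    burden = (np.asarray(qv_genotypes) > 0).any(axis=1).astype(float)
    y = inverse_normal_transform(np.asarray(metabolite))
    X = sm.add_constant(np.column_stack([burden, np.asarray(covariates)]))
    fit = sm.OLS(y, X).fit()
    return fit.params[1], fit.pvalues[1]
```

In practice such a test is run separately for every gene, mask and plasma or urine metabolite, and the significance thresholds quoted above reflect the multiple testing this entails.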
The 73 unique metabolite-associated genes were strongly overrepresented among genes known to be causative for IEMs (odds ratio = 10.6, P value = 1.9 × 10 −14 ; ), with 28 (38%) of them currently known to harbor causative mutations (Supplementary Table ). The QVs detected in our study of middle-aged and older adults were almost exclusively observed in the heterozygous state (Supplementary Data ). Detailed annotation of QVs in the two masks (Supplementary Table ) showed that 63 unique QVs in 15 genes and 73 unique QVs in 17 genes were listed in ClinVar as ‘pathogenic’ or ‘pathogenic or likely pathogenic’ for a corresponding monogenic disease. These observations support the notion that gene-based aggregation of rare, heterozygous, putatively damaging variants effectively identifies gene–metabolite relationships implicated in human diseases. Validation through independent, complementary approaches Independent replication of our findings is complicated by differences in QVs, metabolite quantification methods and different analytical choices across studies. We therefore validated our findings using four complementary approaches: first, the large UKB permitted analysis of the same rare QVs using the same analytical choices , as in our study for two overlapping metabolites, and showed very similar effect sizes for gene–metabolite associations (Fig. ). Second, the UKB proteomics data contain information on circulating levels of the encoded proteins of 17 genes implicated in our study. Burden tests aggregating protein-truncating and rare damaging variants revealed associations with lower levels of 15 of these proteins (in cis , P value < 1 × 10 −5 ; Fig. ) , potentially explained by nonsense-mediated decay. Third, comparison of our findings to those from a previous study of the plasma metabolome showed highly correlated effect sizes with those from our study, both on the variant level and the aggregated level (Spearman correlation coefficient > 0.8; Fig. and Supplementary Table ). Lastly, we performed a proof-of-concept experimental validation study for an implicated gene–metabolite relationship. The B 0 AT1 transporter, encoded by SLC6A19 , is responsible for the uptake of neutral amino acids across the apical membrane of intestinal and kidney epithelial cells . In addition to associations with the levels of the known substrates asparagine, histidine and tryptophan, we also detected associations with methionine sulfone, not yet reported as a substrate. Transport studies in CHO cells overexpressing human SLC6A19 and its co-chaperone collectrin (CLTRN) in comparison to the control indeed confirmed methionine sulfone to be a substrate of the transporter in vitro, in a similar concentration range as its known substrate isoleucine (Fig. and the ). Specificity was shown by complete inhibition of transport activity upon application of the SLC6A19 inhibitor cinromide (Fig. ). Together, these four complementary lines of evidence all support the validity of the detected associations. Prioritization and characteristics of driver variants We next performed a forward selection procedure to assess the contribution of individual QVs to their gene-based association signals . Plots that visualize the association P value based on the successive aggregation of the most influential QVs (Supplementary Data ) revealed noteworthy differences across genes and metabolites, with examples detailed in the . The inclusion of effectively neutral variants among the QVs may dilute their joint signal. 
We thus prioritized the variants with the strongest individual contributions that resulted in the lowest possible association P value when aggregated for burden testing as ‘driver variants’ . For each significant association signal, we identified at least two and up to 48 driver variants (median of 13; Supplementary Data and Supplementary Tables and ). The proteins encoded by the vast majority of identified genes are directly involved in the generation, turnover or transport of the associated metabolite(s). It is therefore a reasonable assumption that truly functional variants are those with the strongest individual contributions to the association signal with the implicated metabolite. Indeed, the minimum association P value based on only driver variants was often many orders of magnitude lower than the one obtained from all QVs, as exemplified by DPYD and plasma uracil (Supplementary Data ). As expected, the proportion of splice, stop-gain and frameshift variants was higher among driver QVs, whereas nondriver QVs contained a greater proportion of missense variants (Fisher’s exact test, P value = 1.3 × 10 −6 ; Extended Data Fig. ). The median effect of driver variants on metabolite levels increased from missense over start/stop lost, frameshift and stop-gain variants to variants predicted to affect splicing (Extended Data Fig. ). The median effect of drivers also increased with lower minor allele count and differed substantially from the one of nondrivers in each minor allele count bin (Extended Data Fig. ). Lastly, evaluation of the convergence of rare and common variant association signals showed that the associations of rare and common variants in the same region with a given metabolite were independent ( , Supplementary Table and Extended Data Fig. ). Heterozygous variants inform about dose–response effects The identification of known IEM-causing variants such as in CTH , PAH , SLC6A19 and SLC7A9 (Supplementary Table ) in the heterozygous state supports the notion that heterozygous QVs are functional alleles that lead to more extreme metabolic changes when present homozygously. For three genes with a homozygous QV present in more than one individual in our study, homozygous individuals tended to have more extreme metabolite levels than heterozygous ones (Extended Data Fig. ), supporting a dose–response effect. Moreover, we had previously confirmed experimentally that heterozygous sulfate-associated QVs in SLC26A1 detected by aggregate variant testing are indeed LoF alleles and that the encoded protein is an important player in human sulfate homeostasis . However, experimental studies of each of the 2,077 QVs and 73 genes detected here are infeasible, and IEMs are so rare that no homozygous person for a given gene may have been observed yet. We therefore used three orthogonal approaches: examination of hemizygosity, in silico knockout modeling and investigation of variants prioritized through allelic series, to evaluate whether the observed metabolite-associated heterozygous variants captured similar information about a gene’s function as might be derived from homozygous damaging variants in the respective gene. X chromosomal genes as a readout of variant homozygosity Genes in the non-pseudo-autosomal region of the X chromosome offer an opportunity to study differences between heterozygous women and effectively homozygous (that is, hemizygous) men. We therefore investigated sex differences for the two X chromosomal genes identified in our screen, TMLHE and RGN (Supplementary Table ). 
Indeed, male carriers of QVs in TMLHE showed clearly higher urine levels of N6,N6,N6-trimethyllysine, the substrate of the encoded enzyme trimethyllysine dioxygenase, than female carriers as well as markedly lower levels of its product hydroxy-N6,N6,N6-trimethyllysine, especially when focusing on driver variants (Fig. and Supplementary Table ). In plasma, male QV carriers showed 1.15 s.d. lower levels of plasma hydroxy-N6,N6,N6-trimethyllysine than noncarriers (P value = 6 × 10⁻⁴⁴), whereas female QV carriers only showed 0.45 s.d. lower metabolite levels than noncarriers (P value = 3 × 10⁻⁴). A similar tendency was observed for RGN and urine levels of the unnamed metabolite X-23436. Levels were higher among both male and female carriers (Supplementary Table ), suggesting that X-23436 is a metabolite upstream of the reaction catalyzed by the encoded regucalcin. Data from the GTEx Project show no sex differences in gene expression across tissues. Hence, sex-differential effects on metabolite levels likely represent a dose–response effect resulting from heterozygosity versus hemizygosity of the involved QVs.
Virtual IEMs mirror the effects of heterozygous variants
We next investigated the implicated genes’ LoF by generating virtual IEMs for 24 genes that covered 60 gene–metabolite pairs via in silico knockout modeling ( and Extended Data Fig. ). We compared the maximal secretion flux of the implicated metabolite into blood and/or urine between the wild-type WBM and the gene-knockout WBM. Initially, the direction of the observed gene–metabolite associations was correctly predicted by virtual IEMs with an accuracy of 73.3% in the male WBM and 76.7% in the female WBM, which is significantly better than chance (Fisher’s exact test, P value = 3.3 × 10⁻³ (male), P value = 1.5 × 10⁻⁴ (female); Supplementary Table ). After model curation informed by the observed gene–metabolite associations, which included the addition of metabolites (for example, 8-methoxykynurenate) and pathways as well as alteration of constraints (for example, diet; details in the and Supplementary Table ), the number of modeled gene–metabolite associations increased to 67, and accuracy increased to 79.1% (male, P value = 2.1 × 10⁻⁵) and 83.58% (female, P value = 2.9 × 10⁻⁷). These findings underline the predictive nature of the virtual IEMs for the aggregated effects of heterozygous damaging variants and highlight opportunities to further improve WBMs by curation of the underlying knowledge base.
Personalized WBMs capture observed metabolic changes
Virtual IEMs as described above only allow for qualitative prediction. To additionally study an equivalent to observed effect sizes, we introduced a second modeling strategy (Extended Data Fig. ) as proof of principle, focusing on the gene KYNU. We successfully generated 569 microbiome-personalized WBMs and calculated the effect size of in silico KYNU knockout on metabolite excretion into urine against the natural variation induced by the personalized microbiomes (Supplementary Table ). Eighteen of 257 metabolites had a modeling P value < 0.05/257, implicating them as potential biomarkers of the corresponding IEM kynureninase deficiency (Supplementary Table ). The in silico effects of these 18 biomarkers, mostly belonging to tryptophan metabolism and the nicotinamide adenine dinucleotide (NAD⁺) de novo synthesis pathway, were significantly correlated with their observed counterparts (Supplementary Fig. ).
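To illustrate the flux comparison underlying these virtual IEMs, the sketch below simulates a gene knockout in a constraint-based metabolic model and contrasts the maximal secretion flux of a metabolite with that of the wild-type model. The published WBM analyses were performed with MATLAB-based COBRA tooling; this example uses the Python package COBRApy instead, and the model file, gene identifier and reaction identifier are illustrative placeholders rather than identifiers from the actual whole-body models.

```python
import cobra

# Illustrative placeholders; real whole-body models use their own identifier
# namespaces and were analysed with MATLAB-based COBRA tooling in the study.
MODEL_FILE = "whole_body_model.xml"        # hypothetical SBML export
GENE_ID = "KYNU"                           # gene to knock out
SECRETION_RXN = "EX_xanthurenate_urine"    # secretion of the metabolite into urine

model = cobra.io.read_sbml_model(MODEL_FILE)


def max_secretion(m, reaction_id):
    """Maximal flux through a secretion (exchange) reaction under FBA."""
    m.objective = reaction_id
    return m.optimize().objective_value


# Wild-type model: maximal secretion of the metabolite of interest.
wild_type_flux = max_secretion(model, SECRETION_RXN)

# Virtual IEM: knock out the gene, which disables all reactions whose
# gene-protein-reaction rules depend on it; changes are reverted on exit.
with model:
    model.genes.get_by_id(GENE_ID).knock_out()
    knockout_flux = max_secretion(model, SECRETION_RXN)

print(f"wild type: {wild_type_flux:.3f}, knockout: {knockout_flux:.3f}")
# The sign of (knockout_flux - wild_type_flux) gives the predicted direction
# of change, which is compared against the observed gene-metabolite association.
```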
Whereas two of the three metabolites with particularly large effects in both in silico modeling and the GCKD study, xanthurenate and 3-hydroxykynurenine, are known biomarkers of kynureninase deficiency, 8-methoxykynurenate was not. We therefore measured absolute levels of these metabolites in urine samples from a homozygous patient with kynureninase deficiency and her parents and confirmed that, in addition to xanthurenate and 3-hydroxykynurenine, 8-methoxykynurenate also constituted a biomarker of this IEM (Fig. and Extended Data Fig. ), consistent with the association statistics from aggregate tests of heterozygous variants from the GCKD study. A similar observation was made with regard to the gene PAH (Fig. , Supplementary Fig. and ). Thus, in silico knockout modeling of two proof-of-principle examples faithfully captured metabolic changes observed for heterozygous variants detected in population studies and for the corresponding recessively inherited IEMs.
Metabolites represent intermediate readouts of human traits
Allelic series describe a dose–response relationship, in which increasingly deleterious mutations in a gene result in increasingly larger effects on a trait or a disease. We hypothesized that genetic effects on metabolite levels should manifest as allelic series if the metabolite represents a molecular readout of an underlying (patho-)physiological process. As proof of principle, we investigated plasma sulfate because of solid evidence for causal gene–metabolite relationships: first, QVs in SLC13A1 showed a significant aggregate effect on lower plasma sulfate levels (P value = 3 × 10⁻¹⁸, lowest possible P value = 2 × 10⁻²⁵). The observed association is well supported by experimental studies establishing that the encoded Na⁺–sulfate cotransporter NaS1 (SLC13A1) reabsorbs filtered sulfate at the apical membrane of kidney tubular epithelial cells. Second, we had previously confirmed experimentally that plasma sulfate-associated QVs in SLC26A1 are LoF alleles that lead to reduced sulfate transport, consistent with the aggregate effect of driver variants in SLC26A1 reaching a P value of 2 × 10⁻¹¹ for association with plasma sulfate (Extended Data Fig. ). The encoded sulfate transporter SAT1 localizes to basolateral membranes of tubular epithelial cells and works in series with NaS1 to mediate transcellular sulfate reabsorption (Fig. ). Based on a growth retardation phenotype in Slc13a1-knockout mice and an association between SLC13A1 and lower sitting height in the UKB (P value = 3 × 10⁻⁸; Supplementary Tables and ), we investigated relations of six functional driver QVs in SLC13A1 and SLC26A1 with anthropometric measurements in the UKB. Supplementary Table contains traits with which at least two QVs showed nominally significant associations (P value < 0.05). The genetic effect sizes on plasma sulfate levels in the GCKD study and both sitting and standing heights in the UKB were correlated (Pearson correlation coefficients of 0.57 and 0.70, respectively; Fig. ). These observations support a causal relationship between transcellular sulfate reabsorption and human height and designate plasma sulfate as an intermediate readout. Additionally, we observed significantly lower standing height among carriers of driver variants in SLC13A1 and SLC26A1 than among noncarriers in a subsample of the GCKD study (N = 3,239) with measured height.
The aggregated effect size of driver variants in SLC13A1 was −0.54 (corresponding to −5.17 cm when height was not inverse normal transformed, P value = 1.6 × 10⁻³; Supplementary Fig. ). For SLC26A1, we obtained an even stronger effect size of −0.73 (corresponding to −6.68 cm, P value = 1.7 × 10⁻⁶; Supplementary Fig. ). The first patient homozygous for a LoF stop-gain mutation in SLC13A1, p.Arg12*, has just been described. Aside from sitting height >2 s.d. below the normal range, the patient featured multiple skeletal abnormalities. Experimental transport studies as well as the patient’s fractional sulfate excretion of almost 100% establish this variant as a complete LoF, resulting in renal sulfate wasting. In this study, we found that, compared with noncarriers of p.Arg12*, heterozygous carriers showed 0.95 s.d. lower plasma sulfate levels (GCKD, 22 carriers, P value = 9.9 × 10⁻¹⁰) and 0.08 s.d. lower sitting height (UKB, 2,480 carriers, P value = 2.2 × 10⁻⁷). Plasma sulfate measurements from heterozygous carriers therefore are indicative of more extreme phenotypic changes in homozygous carriers.
Variants altering sulfate uptake and musculoskeletal traits
Rare LoF variants in SLC13A1 and SLC26A1 have been linked to individual musculoskeletal phenotypes through IEMs and GWASs. We further investigated the association between the same six functional, sulfate-associated QVs in SLC13A1 and SLC26A1 and musculoskeletal disorders, fractures and injuries in the UKB, for which at least two carriers with and without disease were present. There were 116 nominally significant (P value < 0.05) associations with clinical traits and diseases, 113 of which were associated with increased odds of disease (Fig. ). For instance, the odds of various fractures ranged up to 30.7 (closed fracture of the neck, P value = 2.1 × 10⁻⁸, NaS1 p.Trp48*; Supplementary Table ). While the increased odds support a relationship between LoF variants in sulfate transporters and predisposition to several musculoskeletal disorders, the power to detect decreased odds was limited because of the rareness of the QVs and many of the disorders. UKB participants who carried more than one copy of any of the six QVs were investigated more closely. The rare allele, resulting in the p.Arg272Cys substitution in NaS1, was observed in nine heterozygous carriers in the GCKD study and prioritized because of its location in a splice region, its high impact on plasma sulfate levels and its particularly large effect on human height (Fig. ). In the UKB, we found 294 heterozygous carriers of p.Arg272Cys, four persons who carried both p.Arg272Cys in NaS1 and p.Leu348Pro in SAT1 and a single person homozygous for p.Arg272Cys. Age- and sex-specific z scores for human height showed a clear dose–response effect (Fig. and the ). The stronger effects among the four individuals heterozygous for LoF variants in each of the two transcellular sulfate reabsorption proteins as compared with heterozygous carriers of p.Arg272Cys only support additive effects across the pathway for human growth. Carrier status for NaS1 p.Arg272Cys was associated with increased odds of several musculoskeletal diseases such as back pain and intervertebral disk disorders as well as fractures (Fig. ). Homozygous persons were also identified for NaS1 p.Arg12* and SAT1 p.Leu348Pro, with similar findings (Extended Data Fig. ).
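As a simple illustration of the dose–response comparison described above, the snippet below standardizes height within sex and age strata and summarizes the resulting z scores by carrier group. The input file and column names are hypothetical, and the snippet is not the actual UKB analysis.

```python
import pandas as pd

# Hypothetical input with one row per participant and columns:
# 'height' (cm), 'sex', 'age_band', and 'carrier_group' taking values such as
# 'noncarrier', 'het_R272C', 'het_R272C_plus_L348P' and 'hom_R272C'.
df = pd.read_csv("height_by_carrier_group.csv")

# Age- and sex-specific z scores: standardize height within each stratum
# (the stratum mean and s.d. are dominated by noncarriers).
strata = df.groupby(["sex", "age_band"])["height"]
df["height_z"] = (df["height"] - strata.transform("mean")) / strata.transform("std")

# A mean z score that decreases from noncarriers over single and double
# heterozygotes to homozygous carriers indicates a dose-response effect.
order = ["noncarrier", "het_R272C", "het_R272C_plus_L348P", "hom_R272C"]
summary = (
    df.groupby("carrier_group")["height_z"]
      .agg(["count", "mean"])
      .reindex(order)
)
print(summary)
```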
Together, these findings provide strong support that genetic variants that proxy lower transcellular sulfate reabsorption are associated with human height and several musculoskeletal traits and diseases. Prioritizing variants with strong effects in allelic series for subsequent investigation in larger studies, even if the biomarker association rests on only a few heterozygous alleles, can therefore be an effective strategy to gain insights into the impact of rare damaging variants on human health.
Relation of metabolite-associated genes to clinical traits
A query of associations between the identified 2,077 QVs and 73 genes with thousands of quantitative and binary health outcomes using data from ~450,000 UKB participants revealed multiple biologically plausible significant and suggestive associations for genes (Supplementary Table ) and QVs (Supplementary Table ) but also less-studied relationships. The genes SLC47A1, SLC6A19, SLC7A9 and SLC22A7 were associated with one or more measures of kidney function and encode transport proteins highly expressed in the kidney. Their localization at the apical versus basolateral membrane of tubular epithelial kidney cells corresponded to the matrix (urine versus plasma) in which they left corresponding metabolic fingerprints. This observation illustrates that rare genetic variants associated with clinical markers of organ function can leave specific signatures in organ-adjacent biofluids that reflect their roles in cellular exchange processes.
Indeed, male carriers of QVs in TMLHE showed clearly higher urine levels of N 6 , N 6 , N 6 -trimethyllysine, the substrate of the encoded enzyme trimethyllysine dioxygenase, than female carriers as well as markedly lower levels of its product hydroxy- N 6 , N 6 , N 6 -trimethyllysine, especially when focusing on driver variants (Fig. and Supplementary Table ). In plasma, male QV carriers showed 1.15 s.d. lower levels of plasma hydroxy- N 6 , N 6 , N 6 -trimethyllysine than noncarriers ( P value = 6 × 10 −44 ), whereas female QV carriers only showed 0.45 s.d. lower metabolite levels than noncarriers ( P value = 3 × 10 −4 ). A similar tendency was observed for RGN and urine levels of the unnamed metabolite X-23436. Levels were higher among both male and female carriers (Supplementary Table ), suggesting that X-23436 is a metabolite upstream of the reaction catalyzed by the encoded regucalcin. Data from the GTEx Project show no sex differences in gene expression across tissues. Hence, sex-differential effects on metabolite levels likely represent a dose–response effect resulting from heterozygosity versus hemizygosity of the involved QVs. We next investigated the implicated genes’ LoF by generating virtual IEMs for 24 genes that covered 60 gene–metabolite pairs via in silico knockout modeling ( and Extended Data Fig. ). We compared the maximal secretion flux of the implicated metabolite into blood and/or urine between the wild-type WBM and the gene-knockout WBM. Initially, the direction of the observed gene–metabolite associations was correctly predicted by virtual IEMs with an accuracy of 73.3% in the male WBM and 76.7% in the female WBM, which is significantly better than chance (Fisher’s exact test, P value = 3.3 × 10 −3 (male), P value = 1.5 × 10 −4 (female); Supplementary Table ). After model curation informed by the observed gene–metabolite associations, which included the addition of metabolites (for example, 8-methoxykynurenate) and pathways as well as alteration of constraints (for example, diet; details in the and Supplementary Table ), the number of modeled gene–metabolite associations increased to 67, and accuracy increased to 79.1% (male, P value = 2.1 × 10 −5 ) and 83.58% (female, P value = 2.9 × 10 −7 ). These findings underline the predictive nature of the virtual IEMs for the aggregated effects of heterozygous damaging variants and highlight opportunities to further improve WBMs by curation of the underlying knowledge base. Virtual IEMs as described above only allow for qualitative prediction. To additionally study an equivalent to observed effect sizes, we introduced a second modeling strategy (Extended Data Fig. ) as proof of principle, focusing on the gene KYNU . We successfully generated 569 microbiome-personalized WBMs and calculated the effect size of in silico KYNU knockout on metabolite excretion into urine against the natural variation induced by the personalized microbiomes (Supplementary Table ). Eighteen of 257 metabolites had a modeling P value < 0.05/257, implicating them as potential biomarkers of the corresponding IEM kynureninase deficiency (Supplementary Table ). The in silico effects of these 18 biomarkers, mostly belonging to tryptophan metabolism and the nicotinamide adenine dinucleotide (NAD) + de novo synthesis pathway, were significantly correlated with their observed counterparts (Supplementary Fig. ). 
Whereas two of the three metabolites with particularly large effects in both in silico modeling and the GCKD study, xanthurenate and 3-hydroxykynurenine, are known biomarkers of kynureninase deficiency , 8-methoxykynurenate was not. We therefore measured absolute levels of these metabolites in urine samples from a homozygous patient with kynureninase deficiency and her parents and confirmed that, in addition to xanthurenate and 3-hydroxykynurenine, 8-methoxykynurenate also constituted a biomarker of this IEM (Fig. and Extended Data Fig. ), consistent with the association statistics from aggregate tests of heterozygous variants from the GCKD study. A similar observation was made with regard to the gene PAH (Fig. , Supplementary Fig. and ). Thus, in silico knockout modeling of two proof-of-principle examples faithfully captured metabolic changes observed for heterozygous variants detected in population studies and for the corresponding recessively inherited IEMs. Allelic series describe a dose–response relationship, in which increasingly deleterious mutations in a gene result in increasingly larger effects on a trait or a disease. We hypothesized that genetic effects on metabolite levels should manifest as allelic series if the metabolite represents a molecular readout of an underlying (patho-)physiological process. As proof of principle, we investigated plasma sulfate because of solid evidence for causal gene–metabolite relationships: first, QVs in SLC13A1 showed a significant aggregate effect on lower plasma sulfate levels ( P value = 3 × 10 −18 , lowest possible P value = 2 × 10 −25 ). The observed association is well supported by experimental studies establishing that the encoded Na + –sulfate cotransporter NaS1 (SLC13A1) reabsorbs filtered sulfate at the apical membrane of kidney tubular epithelial cells . Second, we had previously confirmed experimentally that plasma sulfate-associated QVs in SLC26A1 are LoF alleles that lead to reduced sulfate transport , consistent with the aggregate effect of driver variants in SLC26A1 reaching a P value of 2 × 10 −11 for association with plasma sulfate (Extended Data Fig. ). The encoded sulfate transporter SAT1 localizes to basolateral membranes of tubular epithelial cells and works in series with NaS1 to mediate transcellular sulfate reabsorption (Fig. ) , . Based on a growth retardation phenotype in Slc13a1 -knockout mice and an association between SLC13A1 and lower sitting height in the UKB ( P value = 3 × 10 −8 ; Supplementary Tables and ), we investigated relations of six functional driver QVs in SLC13A1 and SLC26A1 with anthropometric measurements in the UKB . Supplementary Table contains traits with which at least two QVs showed nominally significant associations ( P value < 0.05). The genetic effect sizes on plasma sulfate levels in the GCKD study and both sitting and standing heights in the UKB were correlated (Pearson correlation coefficients of 0.57 and 0.70, respectively; Fig. ). These observations support a causal relationship between transcellular sulfate reabsorption and human height and designate plasma sulfate as an intermediate readout. Additionally, we observed significantly lower standing height among carriers of driver variants in SLC13A1 and SLC26A1 than among noncarriers in a subsample of the GCKD study ( N = 3,239) with measured height. The aggregated effect size of driver variants in SLC13A1 was −0.54 (corresponding to −5.17 cm when height was not inverse normal transformed, P value = 1.6 × 10 −3 ; Supplementary Fig. ). 
For SLC26A1 , we obtained even a stronger effect size of −0.73 (corresponding to −6.68 cm, P value = 1.7 × 10 −6 ; Supplementary Fig. ). The first patient homozygous for a LoF stop-gain mutation in SLC13A1 , p.Arg12*, has just been described . Aside from sitting height >2 s.d. below the normal range, the patient featured multiple skeletal abnormalities. Experimental transport studies as well as the patient’s fractional sulfate excretion of almost 100% establish this variant as a complete LoF, resulting in renal sulfate wasting. In this study, we found that, compared with noncarriers of p.Arg12*, heterozygous carriers showed 0.95 s.d. lower plasma sulfate levels (GCKD, 22 carriers, P value = 9.9 × 10 −10 ) and 0.08 s.d. lower sitting height (UKB, 2,480 carriers, P value = 2.2 × 10 −7 ). Plasma sulfate measurements from heterozygous carriers therefore are indicative of more extreme phenotypic changes in homozygous carriers. Rare LoF variants in SLC13A1 and SLC26A1 have been linked to individual musculoskeletal phenotypes through IEMs and GWASs , – . We further investigated the association between the same six functional, sulfate-associated QVs in SLC13A1 and SLC26A1 and musculoskeletal disorders, fractures and injuries in the UKB, for which at least two carriers with and without disease were present . There were 116 nominally significant ( P value < 0.05) associations with clinical traits and diseases, 113 of which were associated with increased odds of disease (Fig. ). For instance, the odds of various fractures ranged up to 30.7 (closed fracture of the neck, P value = 2.1 × 10 −8 , NaS1 p.Trp48*; Supplementary Table ). While the increased odds support a relationship between LoF variants in sulfate transporters and predisposition to several musculoskeletal disorders, the power to detect decreased odds was limited because of the rareness of the QVs and many of the disorders. UKB participants who carried more than one copy of any of the six QVs were investigated more closely. The rare allele, resulting in the p.Arg272Cys substitution in NaS1, was observed in nine heterozygous carriers in the GCKD study and prioritized because of its location in a splice region, its high impact on plasma sulfate levels and its particularly large effect on human height (Fig. ). In the UKB, we found 294 heterozygous carriers of p.Arg272Cys, four persons who carried both p.Arg272Cys in NaS1 and p.Leu348Pro in SAT1 and a single person homozygous for p.Arg272Cys. Age- and sex-specific z scores for human height showed a clear dose–response effect (Fig. and the ). The stronger effects among the four individuals heterozygous for LoF variants in each of the two transcellular sulfate reabsorption proteins as compared with heterozygous carriers of p.Arg272Cys only support additive effects across the pathway for human growth. Carrier status for NaS1 p.Arg272Cys was associated with increased odds of several musculoskeletal diseases such as back pain and intervertebral disk disorders as well as fractures (Fig. ). Homozygous persons were also identified for NaS1 p.Arg12* and SAT1 p.Leu348Pro, with similar findings (Extended Data Fig. ). Together, these findings provide strong support that genetic variants that proxy lower transcellular sulfate reabsorption are associated with human height and several musculoskeletal traits and diseases. 
Prioritizing variants with strong effects in allelic series for subsequent investigation in larger studies, even if the biomarker association rests on only a few heterozygous alleles, can therefore be an effective strategy to gain insights into the impact of rare damaging variants on human health. A query of associations between the identified 2,077 QVs and 73 genes with thousands of quantitative and binary health outcomes using data from ~450,000 UKB participants revealed multiple biologically plausible significant and suggestive associations for genes (Supplementary Table ) and QVs (Supplementary Table ) but also less-studied relationships . The genes SLC47A1 , SLC6A19 , SLC7A9 and SLC22A7 were associated with one or more measures of kidney function and encode transport proteins highly expressed in the kidney – . Their localization at the apical – versus basolateral membrane of tubular epithelial kidney cells corresponded to the matrix (urine versus plasma) in which they left corresponding metabolic fingerprints. This observation illustrates that rare genetic variants associated with clinical markers of organ function can leave specific signatures in organ-adjacent biofluids that reflect their roles in cellular exchange processes. We performed a comprehensive screen of the aggregate effect of rare, putatively damaging variants on the levels of 1,294 plasma and 1,396 urine metabolites from paired specimens of 4,737 persons. The majority of the 192 identified gene–metabolite relationships have not been reported yet – , and include plasma- and urine-exclusive associations that reflect organ function. The findings were validated through primary data analysis for metabolites available in the UKB, investigation of previously published summary statistics from sequencing-based genetic studies of the plasma metabolome, integration of orthogonal plasma proteomics data and proof-of-concept experimental studies that confirmed a new metabolite association with the transport protein encoded by SLC6A19 . We show, via several genetic, computational and experimental approaches that the rare, almost exclusively heterozygous metabolite-associated variants in our study capture similar information about a gene’s function as can be obtained from the study of rare IEMs but are observed much more frequently and permit insights into graded effects of impaired gene function. First, 38% of identified genes in our study are known to harbor causative mutations for autosomal recessively inherited IEMs that often exhibit concordant but more extreme changes in the implicated metabolite, as exemplified by elevated urine levels of cystine in cystinuria (MIM 220100, SLC7A9 ) or tryptophan in Hartnup disease (MIM 234500, SLC6A19 ). Second, men exhibited significantly larger effects of rare QVs in non-pseudo-autosomal X chromosomal genes on metabolite levels than women. This observation is consistent with male hemizygosity as an approximation of female homozygosity for a given variant and with the known greater penetrance and severity of X-linked disorders in men than in women . Third, in silico knockout in a virtual metabolic human, that is, full loss of gene function, was predictive for observed metabolic changes associated with variant heterozygosity. Predicted changes on metabolite levels upon in silico gene knockout were also reflected in absolute metabolite quantification of patients with IEM homozygous for a LoF mutation in the respective genes, KYNU and PAH . 
Thus, deterministic, knowledge-based in silico modeling generated context for better biological interpretation also of heterozygous variants, while genetic screens of metabolite levels in population studies permit the identification of knowledge gaps and errors in WBMs. Our modeling pipeline for generating virtual IEMs, which we make publicly available to substantiate evidence from rare variant aggregation tests, will constitute a valuable resource in particular to scrutinize genes for which an IEM has yet to be observed. Fourth, the presence of different causal QVs affecting a given metabolic reaction or pathway enabled the investigation of allelic series. The resulting dose–response relationships proxy a range of target inhibition, which represents desirable information for drug development and is relevant because enzymes and transporters are attractive drug targets. Plasma sulfate-associated functional QVs in SLC13A1 and SLC26A1 showed a clear dose–response effect between the degree of genetically inferred impaired transcellular sulfate reabsorption and lower human height. This observation is biologically plausible, because defects in genes linked to sulfate biology often result in perturbed skeletal growth and development . In particular, constitutive knockouts of Slc13a1 and Slc26a1 in mice do not only cause hyposulfatemia and renal sulfate wasting , but also general growth retardation in Slc13a1 -knockout mice . Interestingly, the missense variant p.Thr185Met in SAT1 exhibited the largest effect on sulfate. We have previously shown experimentally a dominant negative mechanism of this variant , providing another mechanism of how heterozygous variants may promote insights into an effectively full loss of gene function. Moreover, our findings for the p.Arg272Cys variant in NaS1 show that even very few, heterozygous copies of a metabolite-prioritized QV can give rise to the detection of homozygous individuals and hitherto unreported disease associations in subsequent larger studies. These observations suggest that the importance of impaired transcellular epithelial sulfate transport for musculoskeletal diseases, fractures and injuries deserves additional study and should be further substantiated through conditional or mediation analyses if plasma sulfate levels become available in the UKB. Potential limitations of our study include a focus on participants of European ancestry with moderately reduced kidney function, potential violations of assumptions underlying burden tests, in silico prediction of QV pathogenicity and of whole-body modeling and the use of semi-quantitative rather than absolute metabolite levels. Arguments mitigating each of these concerns are detailed in the . In conclusion, exome-wide population studies of rare, putative LoF variants can reveal potentially causal relationships with metabolites and highlight metabolic biomarkers informative of the degree of impaired gene function that can translate into graded associations with human traits. Study design and participants The GCKD study is an ongoing prospective cohort study of 5,217 participants with moderate chronic kidney disease who were enrolled from 2010 to 2012 and are under regular nephrologist care. Inclusion criteria were an age between 18 and 74 years and an eGFR between 30 and 60 ml min −1 per 1.73 m 2 or an eGFR >60 ml min −1 per 1.73 m 2 with a UACR >300 mg per g or with a urinary protein-to-creatinine ratio >500 mg per g . 
Biomaterials, including blood and urine, were collected at the baseline visit, processed and shipped frozen to a central biobank for storage at −80 °C . Details on the study design and participant characteristics have been published , . The GCKD study was registered in the national registry for clinical studies (DRKS 00003971) and approved by local ethics committees of the participating institutions . All participants provided written informed consent. Whole-exome sequencing and quality control Genomic DNA was extracted from whole blood and underwent paired-end 100-bp WES at Human Longevity, using the IDT xGen version 1 capture kit on the Illumina NovaSeq 6000 platform. More than 97% of consensus coding sequence (CCDS) release 22 (ref. ) had at least 10-fold coverage. The average coverage of the CCDS was 141-fold read depth. Exomes were processed from their unaligned FASTQ state in a custom-built cloud compute platform using the Illumina DRAGEN Bio-IT Platform Germline Pipeline version 3.0.7 at AstraZeneca’s Centre for Genomics Research, including alignment of reads to the GRCh38 reference genome ( https://ftp.ncbi.nlm.nih.gov/genomes/all/GCA/000/001/405/GCA_000001405.15_GRCh38/ ) and variant calling . Sample-level quality control included removal of samples from participants who withdrew consent, duplicated samples, those with an estimated VerifyBamID contamination level >4% , samples with inconsistency between reported and genetically predicted sex, samples not having chromosomes XX or XY, samples having <94.5% of CCDS release 22 bases covered with ≥10-fold coverage , related samples with kinship >0.884 (KING, kinship version 2.2.3) and samples with a missing call rate >0.03. Furthermore, only samples with available high-quality DNA microarray genotype data and without outlying values (>8 s.d.) along any of the first ten genetic principle components from a principal component analysis were kept, for a final sample size of 4,779 samples. Variant-level quality control was performed similar to that in ref. , excluding variants with coverage <10, heterozygous variants with a one-sided binomial exact test P value <1 × 10 −6 for Hardy–Weinberg equilibrium, variants with a genotype quality score <30, single-nucleotide variants with a Fisher’s strand bias score (FS) >60 and insertions and deletions with an FS >200, variants with a mapping quality score <40, those with a quality score <30, variants with a read position rank-sum score <−2, those with a mapping quality rank-sum score <−8, variants that did not pass the DRAGEN calling algorithm filters, heterozygous genotype called variants based on an alternative allele read ratio <0.2 or >0.8 and variants with a missing call rate >10% among all remaining samples. This resulted in 1,038,062 variants across the autosomes and the X chromosome. Variant and gene annotation Variants from WES were annotated using the Variant Effect Predictor (VEP) version 101 (ref. ) with standard settings, including the canonical transcript, gene symbol and variant frequencies from gnomAD version 2.1 ( https://gnomad.broadinstitute.org/ ). VEP plugins were used to add the REVEL (version 2020-5) and CADD (version 3.0) scores and to downgrade LoF variants using LOFTEE (version 2020-8) . Furthermore, we added multiple in silico prediction scores using dbNSFP version 4.1a . For interpretation, genes were annotated for their potential function as enzymes using UniProt ( https://www.uniprot.org/ ) and as transporters using data from Gyimesi and Hediger . 
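The variant-level hard filters above map onto a simple predicate. The following is a schematic R sketch only, assuming a hypothetical per-variant table whose column names (DP, GQ, FS, MQ, QUAL, ReadPosRankSum, MQRankSum, ab, hwe_p, miss_rate, type, is_het) are illustrative stand-ins rather than the pipeline's actual field names, and assuming the DRAGEN caller's own filter flags were already applied upstream.

```r
# Schematic sketch of the variant-level hard filters described above (not the
# production pipeline). Column names are hypothetical placeholders.
pass_variant_qc <- function(variants) {
  with(variants,
    DP >= 10 &                                   # exclude coverage <10
    !(is_het & hwe_p < 1e-6) &                   # HWE exact-test filter for heterozygous variants
    GQ >= 30 &                                   # genotype quality
    !(type == "SNV"   & FS > 60) &               # strand bias, SNVs
    !(type == "indel" & FS > 200) &              # strand bias, indels
    MQ >= 40 & QUAL >= 30 &                      # mapping and site quality
    ReadPosRankSum >= -2 & MQRankSum >= -8 &     # rank-sum scores
    !(is_het & (ab < 0.2 | ab > 0.8)) &          # allele balance of heterozygous calls
    miss_rate <= 0.10                            # missing call rate
  )
}
# Usage: variants_qc <- variants[pass_variant_qc(variants), ]
```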
Metabolite identification and quantification Metabolite levels were quantified from stored plasma and spot urine as published by Schlosser et al. . In brief, nontargeted mass spectrometry analysis was conducted at Metabolon. Metabolites were identified by automated comparison of the ion features in the experimental sample to a reference library of chemical standards. Known metabolites reported in this study were identified with the highest confidence level of identification of the Metabolomics Standards Initiative , , unless marked with an asterisk. Unnamed biochemicals of unknown structural identity were identified by virtue of their recurrent nature. For peak quantification, the area under the curve was used, followed by normalization to account for interday instrument variation. Data cleaning of quantified metabolites Data cleaning, quality control, filtering and normalization of quantified metabolites in plasma and urine in the GCKD study were performed using an in-house pipeline . Samples and metabolites were evaluated for duplicates; missing and outlying values and metabolites with low variance were excluded. Levels of urine metabolites were normalized using the probabilistic quotient derived from 309 endogenous metabolites with <1% missing values to account for differences in urine dilution. After removing metabolites with <300 individuals with WES data, the remaining 1,294 plasma and 1,396 urine metabolites (Supplementary Table ) were inverse normal transformed before gene-based aggregation testing. Therefore, effect sizes based on effects of aggregated rare variants on the semi-quantitative metabolite measurements have 1 s.d. as a unit. Additional variables Serum and urine creatinine were measured using an IDMS-traceable enzymatic assay (Creatinine Plus, Roche). Serum and urine albumin levels were measured using the Tina-quant assay (Roche–Hitachi Diagnostics). GFR was estimated with the CKD-EPI formula from serum creatinine. UACR was calculated using urinary albumin and creatinine measurements. Full information on WES data, covariates and metabolites was available for 4,713 persons regarding plasma metabolites and for 4,619 persons regarding urine metabolites. Genetic principal components were derived based on principal component analysis on the basis of genotype data using flashpca . Rare variant aggregation testing on metabolite levels We performed burden tests to combine the effects of rare, putatively damaging variants within a gene on metabolite levels assuming a LoF mechanism that results in concordant effect directions on metabolite levels . The selection of high-quality QVs into masks based on their frequency and annotated properties is a state-of-the-art approach in variant aggregation studies . Annotations from VEP version 101 (ref. ) were used to select QVs within each gene for aggregation in burden tests. Because genetic architectures of damaging variants vary across genes, two complementary masks for the selection of QVs were defined. Both masks were restricted to contain only rare variants in canonical transcripts with MAF <1%. All variants that were predicted to be either high-confidence LoF variants or missense variants with a MetaSVM score >0 or in-frame nonsynonymous variants with a fathmm-XF-coding score >0.5 were aggregated into the first mask, termed LoF_mis. 
The second mask, termed HI_mis, contained all variants that were predicted either to have a high-impact consequence defined by VEP (transcript ablation, splice acceptor variant, splice donor variant, stop-gain variant, frameshift variant, start/stop lost variant, and transcript amplification) or to be missense variants with a REVEL score >0.5, a CADD PHRED score >20 or an M-CAP score >0.025. Only genes with an HGNC symbol that were not read-throughs and that contained more than three QVs in at least one of the masks were kept for testing, resulting in 16,525 analyzed genes. Burden tests were carried out as implemented in the seqMeta R package version 1.6.7 (ref. ), adjusting for age, sex, ln(eGFR) and the first three genetic principal components as well as serum albumin for plasma metabolites and ln(UACR) for urinary metabolites, respectively . Genotypes were coded as the number of copies of the rare allele (0, 1, 2) on the autosomes and also on the X chromosome for women. For men, genotypes in the non-pseudo-autosomal region of the X chromosome were coded as (0, 2). Statistical significance was defined as nominal significance corrected for the number of tested genes and principal components that explained more than 95% of the metabolites’ variance (0.05/16,525/600 = 5.04 × 10 −9 in plasma, 0.05/16,525/679 = 4.46 × 10 −9 in urine). For significant gene–metabolite associations, single-variant association tests between each QV in the respective mask and the corresponding metabolite levels were performed under additive modeling, adjusting for the same covariates using the seqMeta R package version 1.6.7 (ref. ). Sensitivity analyses that evaluated all significant gene–metabolite pairs with regard to additional gene-based tests as well as across strata of sex and kidney function are summarized in the and Supplementary Tables and . Assessment of QV contributions and driver variants The investigation of the genetic architecture underlying gene–metabolite associations and the prioritization of QVs according to their contribution to the gene-based association signal were performed using the forward selection procedure from Bomba et al. . First, for each QV v , the P value P v is calculated by performing the burden test aggregating all QVs other than v . Second, for each QV v , the difference Δ v between P v and the total P value of the burden test including all QVs is calculated. Subsequently, QVs are ranked by the magnitude of Δ v . QVs not contributing to the gene signal or even having an opposite effect can provide a negative Δ v . Finally, burden tests are performed by adding the ranked QVs one after the other until the lowest P value is reached, starting with the greatest Δ v . This identified a set of QVs that contained only variants that contributed most to the gene–metabolite association signal (that is, led to a stronger association signal) and did not contain variants that introduced noise (that is, neutral variants or those with a small or even opposite effect on metabolite levels). The resulting set of selected variants that led to the lowest possible association P value was designated ‘driver variants’ for the respective gene–metabolite association. Driver variants within a gene might differ for different associated metabolites. Relation of QVs in SLC13A1 and SLC26A1 to musculoskeletal traits WES and biomedical data of the UKB were used to investigate allelic series of functional QVs in SLC13A1 and SLC26A1 with hypothesized related clinical traits and diseases. 
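To make the gene-based aggregation and the forward selection of driver variants described above concrete, here is a minimal R sketch. It is not the actual pipeline (the study used seqMeta's burden test with the covariates listed above); a plain linear-model burden test stands in for it, and the object names (geno, a 0/1/2 individuals x QVs matrix for one gene; y, the inverse-normal-transformed metabolite; covars, a data frame of covariates) are hypothetical.

```r
# Minimal stand-in for a burden test: regress the metabolite on the aggregated
# rare-allele count of a set of QVs, adjusting for covariates.
burden_p <- function(geno, y, covars, qv_set) {
  dat <- data.frame(y = y,
                    burden = rowSums(geno[, qv_set, drop = FALSE]),
                    covars)
  fit <- lm(y ~ ., data = dat)
  summary(fit)$coefficients["burden", "Pr(>|t|)"]
}

# Forward selection of 'driver variants':
# 1) leave each QV out in turn and record the change in the burden P value,
# 2) rank QVs by that change, 3) add them back one by one and keep the prefix
#    that yields the lowest attainable P value.
driver_variants <- function(geno, y, covars) {
  all_qvs <- colnames(geno)
  p_all   <- burden_p(geno, y, covars, all_qvs)
  delta   <- sapply(all_qvs, function(v)
               burden_p(geno, y, covars, setdiff(all_qvs, v)) - p_all)
  ranked  <- all_qvs[order(delta, decreasing = TRUE)]   # strongest contributors first
  p_path  <- sapply(seq_along(ranked), function(k)
               burden_p(geno, y, covars, ranked[seq_len(k)]))
  ranked[seq_len(which.min(p_path))]                    # set minimizing the burden P value
}
```

The returned prefix corresponds to the driver variants; nondriver QVs are those whose inclusion no longer lowers the association P value.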
We focused on SLC13A1 driver variants with experimental validation or that likely result in a severe consequence (stop-gain, splicing) to select truly functional QVs. Among these, the stop-gain variant encoding p.Arg12*, for which a complete LoF has experimentally been validated , the stop-gain substitution p.Trp48*, for which associations with decreased serum sulfate levels and skeletal phenotypes were reported, and the missense variant encoding p.Arg272Cys, located in a splice region, were available in the UKB. For SLC26A1 , we selected driver QVs for which reduced sulfate transport activity had previously been shown , of which p.Leu384Pro, p.Ser358Leu and p.Thr185Met were available in the UKB. All 6 QVs passed the ‘90pct10dp’ QC filter, defined as at least 90% of all genotypes for a given variant, independent of variant allele zygosity, had a read depth of at least 10 ( https://biobank.ndph.ox.ac.uk/ukb/ukb/docs/UKB_WES_AnalysisBestPractices.pdf ). Analyses were performed on the UKB Research Analysis Platform. Participants with all ancestries were included into the analysis but excluding strongly related individuals, defined as those that were excluded from the kinship inference process and those with ten or more third-degree relatives. After individual-level filtering, 468,292 individuals remained for analyses. Of these, ten participants were homozygous for one of the six QVs and 7,280 persons were heterozygous for at least one of the QVs. For these homozygous or heterozygous persons, we determined age- and sex-specific z scores of their quantitative anthropometric measurements, enabling interpretation of their measurements compared with noncarriers of the same age and sex. Age- and sex-specific distributions were inverse normal transformed before calculating z scores. The association between each of the six functional QVs with medical diagnoses defined by International Classification of Diseases version 10 (ICD-10) codes based on UKB field 41202 (primary or main diagnosis codes across hospital inpatient records) was investigated. We selected musculoskeletal diseases (ICD-10 codes starting with ‘M’) and fractures and injuries (ICD-10 codes starting with ‘S’ and containing ‘fracture’, ‘dislocation’ or ‘sprain’ terms). To avoid unreliable estimates, traits were restricted to those with at least two rare variant carriers among both individuals with and without disease. The association was examined using Fisher’s exact test under dominant modeling and Firth regression under additive modeling (‘brglm2’ R package ). We included sex, age at recruitment, sex × age and the first 20 genetic principal components (UKB field 22009) as covariates in the regression model. The association with quantitative anthropometric traits was assessed after inverse normal transformation via linear regression, additive genotype modeling and adjusting for the same covariates. Gene-based tests for metabolite associations in the UK Biobank We performed gene-based tests for significantly associated metabolites available in the UKB to validate our findings using the same settings for analysis as those in our study. Because metabolite levels in the UKB were quantified by Nightingale Health’s metabolic biomarker platform focusing on lipids, only two (histidine and phenylalanine) of the 122 significantly associated plasma metabolites were available. Histidine and phenylalanine values (UKB data fields 23463 and 23468) were inverse normal transformed. 
Sample and variant QC was performed, and covariates were included as described in the previous paragraph. A total of 260,000 individuals were available for analysis. Association analysis for the two identified gene–metabolite pairs, histidine and HAL as well as phenylalanine and PAH , was performed based on burden tests as implemented in REGENIE version 3.3 in two steps using the HI_mis mask, selecting only QVs that were present in the GCKD study to ensure reproducibility of rare variant effects between the studies. Setup of the whole-body model and mapping The sex-specific and organ-resolved WBM covers 13,543 unique metabolic reactions and 4,140 unique metabolites based on the generic genome-scale reconstruction of human metabolism, Recon3D, and adequate physiological and coupling constraints. Of all observed significant gene–metabolite pairs from the GCKD study, 51 genes and 69 metabolites could be mapped onto Recon3D. For 36 of 51 genes, their associated metabolites could be mapped, resulting in 69 unique gene–metabolite pairs. To investigate perturbations in gene G, we first identified all reactions $R_G = \{r_{G_1}, \ldots, r_{G_n}\}$ of the corresponding encoded enzymes or transporters in the WBM. We included those genes (27 of 36) in the generation of virtual IEMs that were exclusively causal for a non-empty set of reactions (that is, for a gene G associated with reactions $R_G = \{r_{G_1}, \ldots, r_{G_n}\}$, there did not exist a gene H that was associated with any reaction of $R_G$) and metabolites with urinary excretion reactions, leading to the exclusion of SLC22A7 and SULT2A1 . In silico knockout modeling via linear programming Knockout simulations were based on maximizing the flux of the excretion or demand reaction of the metabolite of interest M under different conditions in a steady-state setting (Sv = 0), where S is the stoichiometric matrix (rows, metabolites; columns, reactions), and v is the flux vector through each reaction, adhering to specific constraints (v_l ≤ v ≤ v_u):

$$\max_{v}\; c^{T}v, \quad \text{subject to}\ Sv = 0,\ v_{l} \le v \le v_{u}. \tag{1}$$

For simulating a wild-type model for gene G, we solved the linear programming (LP) problem stated in equation (1), choosing the linear objective as the sum of all corresponding fluxes of reactions in $R_G$:

$$S_{G} := \max \sum_{k=1}^{n} v_{G_{k}}, \quad \text{subject to}\ Sv = 0,\ v_{l} \le v \le v_{u}. \tag{2}$$

First, we checked whether S_G > 10 −6 , a criterion implemented in the function checkIEM_WBM of the PSCM toolbox for deciding whether the corresponding reactions could carry any flux. All reactions except the TMLHE -associated reactions passed this criterion. Next, we maximized the flux of two key reactions: the urine excretion reaction (for example, EX_M[u]) and the created unbounded demand reaction (for example, DM_M[bc]), designed to reflect accumulation in the blood compartment. First, we unbounded the upper bound of the urine excretion reaction. Next, we maximized the corresponding fluxes of metabolite M as the LP problem stated in equation (1) under the additional constraint that $\sum_{k=1}^{n} v_{G_{k}} = S_{G}$, providing the maximal urine excretion and the maximal flux into blood given the constraint setting.
Finally, to simulate the complete LoF, we blocked all reactions in all organs catalyzed by gene G by setting $v_{G_1} = \ldots = v_{G_n} = 0$. We derived maximum fluxes as in the wild-type model. Subsequently, we tested whether the knockout resulted in an increase, a decrease or no change in EX_M[u] and DM_M[bc] for each mapped gene–metabolite pair that was significant in the GCKD cohort. From the initial 36 genes mapped onto Recon3D, 24 genes and their mapped metabolites fulfilled all criteria (exclusively causal, reactions of the genes carry flux, urinary excretion reaction present), leading to 60 modeled gene–metabolite pairs. After curation of the male and female models, 26 genes ( TMLHE and KYAT1 added) and 67 gene–metabolite pairs could be computed. LP simulations were carried out in Windows 10 using MATLAB 2021a (MathWorks) as the simulation environment, ILOG CPLEX version 12.9 (IBM) as the LP solver, the COBRA Toolbox version 3.4 (ref. ) and the PSCM toolbox . Microbiome personalization of whole-body models Microbiome-personalized WBMs were generated by creating community models based on the genome-scale reconstructions of microbes in the AGORA1 resource. Models have been shown to accurately reflect aspects of the fecal host metabolome. Briefly, from microbe identification and relative abundance data of a metagenomic sample, genome-scale reconstructions of the identified microbes are joined together and connected via a lumen compartment, where they can exchange metabolites to form a microbial community. Each microbial community model is then integrated in the WBM by connecting the microbiota lumen compartment to the large intestinal lumen of the WBM. Microbial community models ( n = 616) were based on publicly available metagenomics data from Yachida et al. and then embedded into the male WBM to form 616 personalized WBMs. In silico knockout modeling using quadratic programming While maintaining the same conditions as outlined in equation (1), rather than maximizing a linear objective, we minimized a quadratic objective for each personalized WBM:

$$\min_{v}\; \tfrac{1}{2}\, v^{T} Q v, \quad \text{subject to}\ Sv = 0,\ v_{l} \le v \le v_{u}. \tag{3}$$

Here, Q is a diagonal matrix with 10 −6 on its diagonal, a value recommended in the COBRA Toolbox . Because of convexity attributes, equation (3) allows for calculation of a unique flux distribution. For each solution v*, we obtained the corresponding urine excretion reactions of the measured and mapped metabolites. For knockout simulations, the associated reactions of gene G were set to zero ($v_{G_1} = \ldots = v_{G_n} = 0$). Then, equation (3) was solved if possible. An optimal quadratic programming (QP) solution could be computed for 582 wild-type models, 590 KYNU -knockout WBMs and 588 PAH -knockout WBMs, which led to 569 paired QP– KYNU solutions and 567 paired QP– PAH solutions. We analyzed urine secretion fluxes for 257 metabolites covered in the GCKD urine metabolome data and 272 metabolites covered in the GCKD plasma metabolome data that had non-zero flux values. For KYNU , the urine compartment was analyzed, as biomarker quantification for the corresponding IEM is done in urine. Analogously for PAH , the blood metabolome data were analyzed as the clinically relevant compartment.
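As a self-contained toy illustration of the wild-type versus knockout flux maximization in equations (1) and (2), the R sketch below maximizes the urine excretion flux of a metabolite M in an invented three-metabolite network and then repeats the optimization with the gene-G-dependent reaction blocked. This is only a sketch of the principle under assumed toy stoichiometry; the actual virtual IEMs were generated from the sex-specific WBMs with the COBRA and PSCM toolboxes in MATLAB.

```r
library(lpSolve)

# Toy stoichiometric matrix (rows: metabolites A, M, B; columns: reactions).
# v1: uptake -> A; v2 (catalyzed by 'gene G'): A -> M; v3: A -> B;
# v4: EX_M[u], urine excretion of M; v5: EX_B, excretion of B.
S  <- rbind(A = c(1, -1, -1,  0,  0),
            M = c(0,  1,  0, -1,  0),
            B = c(0,  0,  1,  0, -1))
ub <- c(10, 1000, 1000, 1000, 1000)   # flux upper bounds; lower bounds are 0 (irreversible toy model)

max_urine_excretion <- function(knockout = FALSE) {
  ub_ko <- ub
  if (knockout) ub_ko[2] <- 0                       # block the reaction catalyzed by gene G
  obj   <- c(0, 0, 0, 1, 0)                         # objective: flux through EX_M[u]
  const <- rbind(S, diag(length(ub_ko)))            # steady state Sv = 0 plus upper-bound rows
  dirs  <- c(rep("=", nrow(S)), rep("<=", length(ub_ko)))
  rhs   <- c(rep(0, nrow(S)), ub_ko)
  lp("max", obj, const, dirs, rhs)$objval
}

max_urine_excretion(knockout = FALSE)  # wild type: 10 (all uptake routed to M and excreted)
max_urine_excretion(knockout = TRUE)   # virtual IEM: 0, urinary excretion of M collapses
```

Comparing the two optima gives the qualitative increase, decrease or no-change prediction that was then compared with the observed direction of the gene–metabolite association.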
The QP simulations were carried out using the high-performance computing facility, called the Brain-Cluster, of the University of Greifs-wald, employing MATLAB 2019b (MathWorks), ILOG CPLEX version 12.10 (IBM) as the quadratic programming solver and the COBRA Toolbox version 3.4 (ref. ). Statistical analysis of the in silico simulation results The Fisher–Freeman–Halton test was used to determine significance when comparing the in vivo and in silico signs from LP modeling. Statistical analysis of the QP solutions was conducted based on the paired wild-type and knockout fluxes via fixed-effect linear regression for panel data . We used ln(urine secretion flux) as the response variable, the knockout status as the sole predictor (wild type versus knockout) and the personalized microbiome as a fixed effect. Significance thresholds were set to 0.05/257 ( KYNU ) and 0.05/272 ( PAH ). Importantly, the entire variance in the regression models had two sources: (1) the knockout and (2) the microbiome personalization. Significance testing of the in silico regression coefficient of the knockout variable therefore delivers a test of whether the knockout explains substantial amounts of variance in comparison to the variance induced by randomly sampled microbiome communities. The in silico regression coefficients were then correlated with the burden-derived observed regression coefficients of gene–metabolite associations from the GCKD study, and significance was determined using the standard test for Pearson correlations. Experiments on transport activity of SLC6A19 Generation of cells Human SLC6A19 ( NM_001003841.3 → NP_001003841.1 ) and human CLTRN ( TMEM27 ) ( NM_020665.6 → NP_065716.1 ) cDNA was synthesized at Life Technologies Gene Art and cloned into a T-REx inducible expression vector. Both vectors were transfected into CHO T-REx cells and selected with neomycin and hygromycin. Mock cells were made by transfecting with only the TMEM27 vector and selection using hygromycin. Stable pools were then selected by measuring doxycycline-inducible uptake of neutral amino acids (for example, isoleucine) by measuring changes in membrane potential using the FLIPR Tetra system. The selected stable cell pools were then serially diluted to generate single-cell clones, which were subsequently selected based on function using the FLIPR assay and hSLC6A19 and hTMEM27 expression using qPCR. FLIPR membrane potential assay CHO T-REx cells stably expressing doxycycline-inducible hSLC6A19 and hTMEM27 were seeded in a 384-well plate and incubated overnight with 1 µg ml −1 doxycycline. The next day, cells were washed and then incubated with Tyrode’s buffer (sodium free) with FMP-Blue-Dye, which is a membrane potential dye, for 60 min. The cells were then incubated with standard Tyrode’s buffer (130 mM NaCl) with and without cinromide for 10 min before incubation with standard Tyrode’s buffer alone or with eight increasing concentrations of methionine sulfone and isoleucine, both with maximum concentrations of 30 mM. The FLIPR Tetra system was used to read FMP-Blue-Dye fluorescence as a measurement of membrane depolarization as a result of substrate-driven electrogenic net influx of Na + . 
Data were analyzed and represented in two ways: (1) for data comparison with the mock cell line, transport activity was presented as fold over non-substrate-driven signal with the formula (fluorescence signal − median of fluorescence signal with no substrate)/(median of fluorescence signal with no substrate); and (2) for data comparison with cinromide, transport activity was presented as a percent of maximum substrate-driven fluorescence signal with the formula 100 × (fluorescence signal − median of fluorescence signal with no substrate)/(median of fluorescence signal with substrate). Reporting summary Further information on research design is available in the Reporting Summary linked to this article.
, excluding variants with coverage <10, heterozygous variants with a one-sided binomial exact test P value <1 × 10 −6 for Hardy–Weinberg equilibrium, variants with a genotype quality score <30, single-nucleotide variants with a Fisher’s strand bias score (FS) >60 and insertions and deletions with an FS >200, variants with a mapping quality score <40, those with a quality score <30, variants with a read position rank-sum score <−2, those with a mapping quality rank-sum score <−8, variants that did not pass the DRAGEN calling algorithm filters, heterozygous genotype called variants based on an alternative allele read ratio <0.2 or >0.8 and variants with a missing call rate >10% among all remaining samples. This resulted in 1,038,062 variants across the autosomes and the X chromosome. Variants from WES were annotated using the Variant Effect Predictor (VEP) version 101 (ref. ) with standard settings, including the canonical transcript, gene symbol and variant frequencies from gnomAD version 2.1 ( https://gnomad.broadinstitute.org/ ). VEP plugins were used to add the REVEL (version 2020-5) and CADD (version 3.0) scores and to downgrade LoF variants using LOFTEE (version 2020-8) . Furthermore, we added multiple in silico prediction scores using dbNSFP version 4.1a . For interpretation, genes were annotated for their potential function as enzymes using UniProt ( https://www.uniprot.org/ ) and as transporters using data from Gyimesi and Hediger . Metabolite levels were quantified from stored plasma and spot urine as published by Schlosser et al. . In brief, nontargeted mass spectrometry analysis was conducted at Metabolon. Metabolites were identified by automated comparison of the ion features in the experimental sample to a reference library of chemical standards. Known metabolites reported in this study were identified with the highest confidence level of identification of the Metabolomics Standards Initiative , , unless marked with an asterisk. Unnamed biochemicals of unknown structural identity were identified by virtue of their recurrent nature. For peak quantification, the area under the curve was used, followed by normalization to account for interday instrument variation. Data cleaning, quality control, filtering and normalization of quantified metabolites in plasma and urine in the GCKD study were performed using an in-house pipeline . Samples and metabolites were evaluated for duplicates; missing and outlying values and metabolites with low variance were excluded. Levels of urine metabolites were normalized using the probabilistic quotient derived from 309 endogenous metabolites with <1% missing values to account for differences in urine dilution. After removing metabolites with <300 individuals with WES data, the remaining 1,294 plasma and 1,396 urine metabolites (Supplementary Table ) were inverse normal transformed before gene-based aggregation testing. Therefore, effect sizes based on effects of aggregated rare variants on the semi-quantitative metabolite measurements have 1 s.d. as a unit. Serum and urine creatinine were measured using an IDMS-traceable enzymatic assay (Creatinine Plus, Roche). Serum and urine albumin levels were measured using the Tina-quant assay (Roche–Hitachi Diagnostics). GFR was estimated with the CKD-EPI formula from serum creatinine. UACR was calculated using urinary albumin and creatinine measurements. 
Full information on WES data, covariates and metabolites was available for 4,713 persons regarding plasma metabolites and for 4,619 persons regarding urine metabolites. Genetic principal components were derived based on principal component analysis on the basis of genotype data using flashpca . We performed burden tests to combine the effects of rare, putatively damaging variants within a gene on metabolite levels assuming a LoF mechanism that results in concordant effect directions on metabolite levels . The selection of high-quality QVs into masks based on their frequency and annotated properties is a state-of-the-art approach in variant aggregation studies . Annotations from VEP version 101 (ref. ) were used to select QVs within each gene for aggregation in burden tests. Because genetic architectures of damaging variants vary across genes, two complementary masks for the selection of QVs were defined. Both masks were restricted to contain only rare variants in canonical transcripts with MAF <1%. All variants that were predicted to be either high-confidence LoF variants or missense variants with a MetaSVM score >0 or in-frame nonsynonymous variants with a fathmm-XF-coding score >0.5 were aggregated into the first mask, termed LoF_mis. The second mask, termed HI_mis, contained all variants that were predicted either to have a high-impact consequence defined by VEP (transcript ablation, splice acceptor variant, splice donor variant, stop-gain variant, frameshift variant, start/stop lost variant, and transcript amplification) or to be missense variants with a REVEL score >0.5, a CADD PHRED score >20 or an M-CAP score >0.025. Only genes with an HGNC symbol that were not read-throughs and that contained more than three QVs in at least one of the masks were kept for testing, resulting in 16,525 analyzed genes. Burden tests were carried out as implemented in the seqMeta R package version 1.6.7 (ref. ), adjusting for age, sex, ln(eGFR) and the first three genetic principal components as well as serum albumin for plasma metabolites and ln(UACR) for urinary metabolites, respectively . Genotypes were coded as the number of copies of the rare allele (0, 1, 2) on the autosomes and also on the X chromosome for women. For men, genotypes in the non-pseudo-autosomal region of the X chromosome were coded as (0, 2). Statistical significance was defined as nominal significance corrected for the number of tested genes and principal components that explained more than 95% of the metabolites’ variance (0.05/16,525/600 = 5.04 × 10 −9 in plasma, 0.05/16,525/679 = 4.46 × 10 −9 in urine). For significant gene–metabolite associations, single-variant association tests between each QV in the respective mask and the corresponding metabolite levels were performed under additive modeling, adjusting for the same covariates using the seqMeta R package version 1.6.7 (ref. ). Sensitivity analyses that evaluated all significant gene–metabolite pairs with regard to additional gene-based tests as well as across strata of sex and kidney function are summarized in the and Supplementary Tables and . The investigation of the genetic architecture underlying gene–metabolite associations and the prioritization of QVs according to their contribution to the gene-based association signal were performed using the forward selection procedure from Bomba et al. . First, for each QV v , the P value P v is calculated by performing the burden test aggregating all QVs other than v . 
Second, for each QV v , the difference Δ v between P v and the total P value of the burden test including all QVs is calculated. Subsequently, QVs are ranked by the magnitude of Δ v . QVs not contributing to the gene signal or even having an opposite effect can provide a negative Δ v . Finally, burden tests are performed by adding the ranked QVs one after the other until the lowest P value is reached, starting with the greatest Δ v . This identified a set of QVs that contained only variants that contributed most to the gene–metabolite association signal (that is, led to a stronger association signal) and did not contain variants that introduced noise (that is, neutral variants or those with a small or even opposite effect on metabolite levels). The resulting set of selected variants that led to the lowest possible association P value was designated ‘driver variants’ for the respective gene–metabolite association. Driver variants within a gene might differ for different associated metabolites. SLC13A1 and SLC26A1 to musculoskeletal traits WES and biomedical data of the UKB were used to investigate allelic series of functional QVs in SLC13A1 and SLC26A1 with hypothesized related clinical traits and diseases. We focused on SLC13A1 driver variants with experimental validation or that likely result in a severe consequence (stop-gain, splicing) to select truly functional QVs. Among these, the stop-gain variant encoding p.Arg12*, for which a complete LoF has experimentally been validated , the stop-gain substitution p.Trp48*, for which associations with decreased serum sulfate levels and skeletal phenotypes were reported, and the missense variant encoding p.Arg272Cys, located in a splice region, were available in the UKB. For SLC26A1 , we selected driver QVs for which reduced sulfate transport activity had previously been shown , of which p.Leu384Pro, p.Ser358Leu and p.Thr185Met were available in the UKB. All 6 QVs passed the ‘90pct10dp’ QC filter, defined as at least 90% of all genotypes for a given variant, independent of variant allele zygosity, had a read depth of at least 10 ( https://biobank.ndph.ox.ac.uk/ukb/ukb/docs/UKB_WES_AnalysisBestPractices.pdf ). Analyses were performed on the UKB Research Analysis Platform. Participants with all ancestries were included into the analysis but excluding strongly related individuals, defined as those that were excluded from the kinship inference process and those with ten or more third-degree relatives. After individual-level filtering, 468,292 individuals remained for analyses. Of these, ten participants were homozygous for one of the six QVs and 7,280 persons were heterozygous for at least one of the QVs. For these homozygous or heterozygous persons, we determined age- and sex-specific z scores of their quantitative anthropometric measurements, enabling interpretation of their measurements compared with noncarriers of the same age and sex. Age- and sex-specific distributions were inverse normal transformed before calculating z scores. The association between each of the six functional QVs with medical diagnoses defined by International Classification of Diseases version 10 (ICD-10) codes based on UKB field 41202 (primary or main diagnosis codes across hospital inpatient records) was investigated. We selected musculoskeletal diseases (ICD-10 codes starting with ‘M’) and fractures and injuries (ICD-10 codes starting with ‘S’ and containing ‘fracture’, ‘dislocation’ or ‘sprain’ terms). 
To avoid unreliable estimates, traits were restricted to those with at least two rare variant carriers among both individuals with and without disease. The association was examined using Fisher’s exact test under dominant modeling and Firth regression under additive modeling (‘brglm2’ R package ). We included sex, age at recruitment, sex × age and the first 20 genetic principal components (UKB field 22009) as covariates in the regression model. The association with quantitative anthropometric traits was assessed after inverse normal transformation via linear regression, additive genotype modeling and adjusting for the same covariates. We performed gene-based tests for significantly associated metabolites available in the UKB to validate our findings using the same settings for analysis as those in our study. Because metabolite levels in the UKB were quantified by Nightingale Health’s metabolic biomarker platform focusing on lipids, only two (histidine and phenylalanine) of the 122 significantly associated plasma metabolites were available. Histidine and phenylalanine values (UKB data fields 23463 and 23468) were inverse normal transformed. Sample and variant QC was performed, and covariates were included as described in the previous paragraph. A total of 260,000 individuals were available for analysis. Association analysis for the two identified gene–metabolite pairs, histidine and HAL as well as phenylalanine and PAH , was performed based on burden tests as implemented in REGENIE version 3.3 in two steps using the HI_mis mask, selecting only QVs that were present in the GCKD study to ensure reproducibility of rare variant effects between the studies. The sex-specific and organ-resolved WBM covers 13,543 unique metabolic reactions and 4,140 unique metabolites based on the generic genome-scale reconstruction of human metabolism, Recon3D , and adequate physiological and coupling constraints , . Of all observed significant gene–metabolite pairs from the GCKD study, 51 genes and 69 metabolites could be mapped onto Recon3D. For 36 of 51 genes, their associated metabolites could be mapped, resulting in 69 unique gene–metabolite pairs. To investigate perturbations in gene G , we first identified all reactions [12pt]{minimal} $${R}_{G}=\{{r}_{{G}_{1}}, ,{r}_{{G}_{n}}\}$$ R G = { r G 1 , … , r G n } of the corresponding encoded enzymes or transporters in the WBM . We included those genes (27 of 36) in the generation of virtual IEMs that were exclusively causal for a non-empty set of reactions (that is, for a gene G , associated with reactions [12pt]{minimal} $${R}_{G}=\{{r}_{{G}_{1}}, ,{r}_{{G}_{n}}\}$$ R G = { r G 1 , … , r G n } , there did not exist a gene H that was associated with any reaction of R G ) and metabolites with urinary excretion reactions, leading to the exclusion of SLC22A7 and SULT2A1 . Knockout simulations were based on maximizing the flux of the excretion or demand reaction of the metabolite of interest M under different conditions in a steady state setting ( Sv = 0 ), where S is the stoichiometric matrix (rows, metabolites; columns, reactions), and v is the flux vector through each reaction, adhering to specific constraints ( v l ≤ v ≤ v u ) , : 1 [12pt]{minimal} $$_{{}}{{}}^{T}{},\\ {}\,{}\,{}=,\\ {{}}_{{}} {} {{}}_{{}}.$$ max v c T v , subject to Sv = 0 , v l ≤ v ≤ v u . 
For simulating a wild-type model for gene G , we solved the linear programming (LP) problem stated in equation (1), choosing the linear objective as the sum of all corresponding fluxes of reactions in R G : (2) $S_G := \max \sum_{k=1}^{n} v_{G_k}$, subject to $Sv = 0$, $v_l \le v \le v_u$. First, we checked whether $S_G > 10^{-6}$, a criterion implemented in the function checkIEM_WBM of the PSCM toolbox for deciding whether the corresponding reactions could carry any flux , . All reactions except the TMLHE -associated reactions passed this criterion. Next, we maximized the flux of two key reactions: the urine excretion reaction (for example, EX M [ u ]) and the created unbounded demand reaction (for example, DM M [ bc ]), designed to reflect accumulation in the blood compartment. First, we unbounded the upper bound of the urine excretion reaction. Next, we maximized the corresponding fluxes of metabolite M as the LP problem stated in equation (1) under the additional constraint that $\sum_{k=1}^{n} v_{G_k} = S_G$, providing the maximal urine excretion and the maximal flux into blood given the constraint setting. Finally, to simulate the complete LoF, we blocked all reactions in all organs catalyzed by gene G by setting $v_{G_1} = \ldots = v_{G_n} = 0$. We derived maximum fluxes as in the wild-type model. Subsequently, we tested whether the knockout resulted in an increase, a decrease or no change in EX M [ u ] and DM M [ bc ] for each mapped gene–metabolite pair that was significant in the GCKD cohort. From the initial 36 genes mapped onto Recon3D, 24 genes and their mapped metabolites fulfilled all criteria (exclusively causal, reactions of the genes carry flux, urinary excretion reaction present), leading to 60 modeled gene–metabolite pairs. After curation of the male and female models, 26 genes ( TMLHE and KYAT1 added) and 67 gene–metabolite pairs could be computed . LP simulations were carried out in Windows 10 using MATLAB 2021a (MathWorks) as the simulation environment, ILOG CPLEX version 12.9 (IBM) as the LP solver, the COBRA Toolbox version 3.4 (ref. ) and the PSCM toolbox . Microbiome-personalized WBMs were generated by creating community models based on the genome-scale reconstructions of microbes in the AGORA1 resource , . Models have been shown to accurately reflect aspects of the fecal host metabolome , . Briefly, from microbe identification and relative abundance data of a metagenomic sample, genome-scale reconstructions of the identified microbes are joined together and connected via a lumen compartment, where they can exchange metabolites to form a microbial community , . Each microbial community model is then integrated in the WBM by connecting the microbiota lumen compartment to the large intestinal lumen of the WBM. Microbial community models ( n = 616) were based on publicly available metagenomics data from Yachida et al. and then embedded into the male WBM to form 616 personalized WBMs. While maintaining the same conditions as outlined in equation (1), rather than maximizing a linear objective, we minimized a quadratic objective for each personalized WBM: (3) $\min_{v} \, \tfrac{1}{2} v^{T} Q v$, subject to $Sv = 0$, $v_l \le v \le v_u$. Here, Q is a diagonal matrix with $10^{-6}$ on its diagonal, a value recommended in the COBRA Toolbox .
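A minimal sketch of the knockout comparison described above: the reactions assigned to a gene are blocked by setting their bounds to zero (the complete LoF), and the maximal excretion flux is recomputed and compared with the wild-type value. The network and the reaction-to-gene assignment are invented, and the additional constraint from equation (2) is omitted for brevity; the actual pipeline used the PSCM and COBRA toolboxes in MATLAB.

import numpy as np
from scipy.optimize import linprog

def max_excretion(S, v_l, v_u, excretion_idx, blocked=()):
    # Maximal flux through one excretion reaction at steady state (S v = 0);
    # reactions listed in `blocked` are forced to zero to mimic a gene knockout.
    lb, ub = np.array(v_l, dtype=float), np.array(v_u, dtype=float)
    for j in blocked:
        lb[j] = ub[j] = 0.0
    c = np.zeros(S.shape[1])
    c[excretion_idx] = -1.0                      # negate to maximize with linprog
    res = linprog(c, A_eq=S, b_eq=np.zeros(S.shape[0]),
                  bounds=list(zip(lb, ub)), method="highs")
    return -res.fun

# Same toy network as in the previous sketch; reaction r1 is assumed to be catalyzed by gene G.
S = np.array([[1.0, -1.0, 0.0, 0.0],
              [0.0, 1.0, -1.0, 0.0],
              [0.0, 0.0, 1.0, -1.0]])
v_l, v_u = np.zeros(4), np.full(4, 10.0)
wild_type = max_excretion(S, v_l, v_u, excretion_idx=3)               # 10.0
knockout = max_excretion(S, v_l, v_u, excretion_idx=3, blocked=[1])   # 0.0: excretion abolished
print(wild_type, knockout)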
Because of convexity attributes, equation (3) allows for calculation of a unique flux distribution. For each solution v * , we obtained the corresponding urine excretion reactions of the measured and mapped metabolites. For knockout simulations, the associated reactions of gene G were set to zero ( $v_{G_1} = \ldots = v_{G_n} = 0$ ). Then, equation (3) was solved if possible. An optimal quadratic programming (QP) solution could be computed for 582 wild-type models, 590 KYNU -knockout WBMs and 588 PAH -knockout WBMs, which led to 569 paired QP– KYNU solutions and 567 paired QP– PAH solutions. We analyzed urine secretion fluxes for 257 metabolites covered in the GCKD urine metabolome data and 272 metabolites covered in the GCKD plasma metabolome data that had non-zero flux values. For KYNU , the urine compartment was analyzed, as biomarker quantification for the corresponding IEM is done in urine. Analogously for PAH , the blood metabolome data were analyzed as the clinically relevant compartment. The QP simulations were carried out using the high-performance computing facility, called the Brain-Cluster, of the University of Greifswald, employing MATLAB 2019b (MathWorks), ILOG CPLEX version 12.10 (IBM) as the quadratic programming solver and the COBRA Toolbox version 3.4 (ref. ). The Fisher–Freeman–Halton test was used to determine significance when comparing the in vivo and in silico signs from LP modeling. Statistical analysis of the QP solutions was conducted based on the paired wild-type and knockout fluxes via fixed-effect linear regression for panel data . We used ln(urine secretion flux) as the response variable, the knockout status as the sole predictor (wild type versus knockout) and the personalized microbiome as a fixed effect. Significance thresholds were set to 0.05/257 ( KYNU ) and 0.05/272 ( PAH ). Importantly, the entire variance in the regression models had two sources: (1) the knockout and (2) the microbiome personalization. Significance testing of the in silico regression coefficient of the knockout variable therefore delivers a test of whether the knockout explains substantial amounts of variance in comparison to the variance induced by randomly sampled microbiome communities. The in silico regression coefficients were then correlated with the burden-derived observed regression coefficients of gene–metabolite associations from the GCKD study, and significance was determined using the standard test for Pearson correlations. Generation of cells Human SLC6A19 ( NM_001003841.3 → NP_001003841.1 ) and human CLTRN ( TMEM27 ) ( NM_020665.6 → NP_065716.1 ) cDNA was synthesized at Life Technologies Gene Art and cloned into a T-REx inducible expression vector. Both vectors were transfected into CHO T-REx cells and selected with neomycin and hygromycin. Mock cells were made by transfecting with only the TMEM27 vector and selection using hygromycin. Stable pools were then selected by measuring doxycycline-inducible uptake of neutral amino acids (for example, isoleucine) via changes in membrane potential using the FLIPR Tetra system. The selected stable cell pools were then serially diluted to generate single-cell clones, which were subsequently selected based on function using the FLIPR assay and hSLC6A19 and hTMEM27 expression using qPCR. FLIPR membrane potential assay CHO T-REx cells stably expressing doxycycline-inducible hSLC6A19 and hTMEM27 were seeded in a 384-well plate and incubated overnight with 1 µg ml −1 doxycycline.
The next day, cells were washed and then incubated with Tyrode’s buffer (sodium free) with FMP-Blue-Dye, which is a membrane potential dye, for 60 min. The cells were then incubated with standard Tyrode’s buffer (130 mM NaCl) with and without cinromide for 10 min before incubation with standard Tyrode’s buffer alone or with eight increasing concentrations of methionine sulfone and isoleucine, both with maximum concentrations of 30 mM. The FLIPR Tetra system was used to read FMP-Blue-Dye fluorescence as a measurement of membrane depolarization as a result of substrate-driven electrogenic net influx of Na + . Data were analyzed and represented in two ways: (1) for data comparison with the mock cell line, transport activity was presented as fold over non-substrate-driven signal with the formula (fluorescence signal − median of fluorescence signal with no substrate)/(median of fluorescence signal with no substrate); and (2) for data comparison with cinromide, transport activity was presented as a percent of maximum substrate-driven fluorescence signal with the formula 100 × (fluorescence signal − median of fluorescence signal with no substrate)/(median of fluorescence signal with substrate). Further information on research design is available in the Nature Portfolio Reporting Summary linked to this article.
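The two normalizations described above can be written as small helper functions; this is an illustrative Python sketch with assumed argument names, not code from the study.

import numpy as np

def fold_over_baseline(signal, no_substrate_signals):
    # Transport activity as fold over the non-substrate-driven signal.
    baseline = np.median(no_substrate_signals)
    return (np.asarray(signal) - baseline) / baseline

def percent_of_max(signal, no_substrate_signals, substrate_signals):
    # Transport activity as a percent of the maximum substrate-driven signal.
    baseline = np.median(no_substrate_signals)
    return 100.0 * (np.asarray(signal) - baseline) / np.median(substrate_signals)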
Any methods, additional references, Nature Portfolio reporting summaries, source data, extended data, supplementary information, acknowledgements, peer review information; details of author contributions and competing interests; and statements of data and code availability are available at 10.1038/s41588-024-01965-7. Supplementary Information Supplementary Methods, Results, Discussion, Note and Figs. 1–3 Reporting Summary Peer Review File Supplementary Tables 1–18. Supplementary Data 1 Plasma and urine metabolite levels among carriers and noncarriers of QVs in significantly associated genes. Supplementary Data 2 Contribution of individual QVs to their gene-based association signal with plasma and urine metabolite levels.
Molecular-based precision oncology clinical decision making augmented by artificial intelligence
7f3ac31e-3df3-4589-b7a8-d1f24e2672d1
8786281
Internal Medicine[mh]
Clinical decision making is a contextual, continuous, and evolving process, where data are gathered, interpreted, and evaluated in order to select an evidence-based choice of action . It consists of three integrated stages: diagnosis, prognosis assessment and disease management. Molecular tests that check for certain changes at the genomic level that may cause or affect the chance of developing a specific disease or disorder have been routinely used to assist the diagnosis and treatment of infectious diseases, inherited diseases and cancer. As the world's second leading cause of death, cancer refers to a group of genetic diseases caused by disruptive changes to the genes that control the way our cells function. The high-throughput capacity of next-generation sequencing (NGS) technologies has transformed genomic testing and our understanding of cancer , making the practice of precision oncology at a large scale a reality. The field of precision oncology is concerned with developing treatments targeting the molecular characteristics of an individual's tumor . There is a growing recognition that optimal disease management of patients depends on individualized cancer treatment guided by comprehensive molecular testing . A 2017 survey revealed that 75.6% of the 1281 United States oncologists surveyed utilized NGS tests to guide their treatment decisions in the 12-month period preceding the survey . As large-panel genomic sequencing becomes routine in many disease settings, the challenge of translating molecular sequencing reports into actionable clinical insights is not to be underestimated. Artificial Intelligence (AI) refers to a broad field in computer science that studies the theory, algorithms and architectures that enable machines to perform tasks that would otherwise require human intelligence. Machine learning (ML) is a subfield of AI that focuses on the development of computer systems that are able to learn and adapt without explicit instructions, but instead leverage examples to draw inferences and identify patterns computationally. Thanks to the advancement and decreasing costs of computer hardware, the past decade has seen exponential growth in the use and development of ML applications. Deep sequencing of exomes or even whole genomes has also been rapidly integrated into clinical practice, giving rise to well-curated databases such as The Cancer Genome Atlas (TCGA), which provides annotations of clinically relevant metadata suitable to serve as training examples in AI systems . In this review, we provide a general overview of the operational processes involved in using molecular-based results to guide treatment planning in precision oncology ( ) and discuss how ML technologies can be applied to facilitate some of these processes in order to augment clinicians in their decision making. We focus on state-of-the-art systems that have been explored in clinical settings, or those having high translational potential and providing publicly accessible resources (we have verified all of the URLs shared in this review to ensure they are functional as of the time of writing). We also discuss some current limitations in applying AI in clinical processes and present some suggestions for future work to address these concerns. Not all mutations lead to cancer progression . From a clinical perspective, it is now known that the efficacy of targeted therapies depends on the genetic alterations of individual patients .
Understanding the functional and therapeutic significance of individual alterations is therefore critical to support clinicians’ decision making. Computational prediction of variants’ functional effect and pathogenicity Since it is cost prohibitive to functionally evaluate all known mutations using validated assays, it is routine to utilize benchmark in silico tools such as PolyPhen-2, SIFT, FASMIC , to provide computational predictions of the functional effects of variants of interest. PolyPhen-2 leverages a supervised learning method, a Naïve Bayes classifier, to train its model using sequence-based feature profiles derived from two pairs of datasets that contain comprehensive repositories of mutations known to be either damaging or neutral (URL: http://genetics.bwh.harvard.edu/pph2/ ). While tools such as PolyPhen-2, SIFT, VEST and MutationAssessor predict an individual mutation's pathogenicity at the population level , which may provide limited value to clinicians who specialize in ascertaining somatic mutations in cancer patients, several tools such as CHASM, CanDrA, fathmm and transFIC have been developed to identify cancer-specific driver mutations . Specifically, Mao and colleagues developed a meta predictor called CanDrA by leveraging predictions on 95 structural and evolutionary features from 10 existing functional prediction algorithms including CHASM, SIFT and MutationAssessor. Through the utilization of feature optimization and a support vector machine, CanDrA was shown to combat the curse of dimensionality, an issue that plagues the oncology domain, better than its counterparts and yielded better overall performance on real-world data ( https://bioinformatics.mdanderson.org/public-software/candra/ ). Lawrence and colleagues observed that in cancer genome studies, as sample size increases, extensive false-positive findings ensue. They hypothesized that this was due to mutational heterogeneity and incorporated this aspect into their analysis. Their method, named MutSigCV, was applied to exome sequences of 3083 tumor-normal pairs and was found to eliminate most of the artefactual findings ( https://software.broadinstitute.org/cancer/cga/mutsig ) . Dietlein et al. provided another novel methodology built on the observation that mutations in certain nucleotide contexts provide a signal in favor of driver genes; by combining this nucleotide context feature with signals traditionally used for driver-gene identification, their method identified 460 driver genes related to 21 cancer-related pathways when applied to whole-exome sequencing data from 11 873 tumor-normal pairs . To leverage the synergy created by utilizing multiple prediction tools, an ensemble strategy has been explored in some systems. Bailey et al. reported a PanCancer and PanSoftware analysis of data provided by all the projects in TCGA and 26 computational tools. The ensemble tool identified more than 3400 potential missense driver mutations supported by multiple lines of evidence . Using a training set of variants across multiple genes enables ML systems such as PolyPhen-2 to be applicable as a general model; however, for specific genes for which well-characterized data are available (such as BRCA1 and BRCA2 ), gene-specific models likely outperform general models . Using results from validated functional assays, Hart et al.
iteratively trained and evaluated hundreds of ML algorithms along with the associated hyperparameters, resulting in a new optimal BRCA-ML model which yielded a system that significantly outperformed existing approaches. More recently, Martinez-Jimenez et al. systematically combined several prediction tools that aim to identify signals of positive selection of cancer driver genes in tumorigenesis and created an Integrative OncoGenomics (IntOGen) pipeline ( https://www.intogen.org/search ). They applied their ensemble system to somatic mutations of more than 28 000 tumors across 66 cancer types and identified 568 cancer genes that are likely contributors of tumorigenesis . Literature mining of alteration actionability While computational prediction of functional effects of alterations offers some insights about variants of unknown significance (VUSs), different prediction tools may reach different conclusions and sometimes they may not align with experimental observations . Whenever available, publications reporting functionally characterized mutations are still seen as a higher level of evidence than in silico predictions. To achieve this goal at a large scale, a systematic literature mining process is expected. Furthermore, the actionability of an alteration is not only related to its functional but also therapeutic effect. Identifying evidence of therapeutic significance of mutations is largely dependent on reviewing the published biomedical literature. A combination of factors including the growing gene panel size, the enormous and rapidly expanding body of literature, and the labor-intensive nature of manual review makes utilization of informatics systems highly desirable. To computationally assist with an assessment of an alteration's actionability using literature mining, it is critical to accomplish two tasks: (1) named entity recognition (NER), which recognizes the key entities such as biomarkers (including genes, mutations), the effects of interest, as well as therapies and conditions (for therapeutic contexts); and (2) relationship extraction. Traditionally, NER tasks may benefit from dictionary and rule-based NLP systems, especially on specific domains where expert input is available for defining customized rules. But they may be difficult to extend to other types of entities; iterative updates of dictionaries and rules may also increase the maintenance cost. Machine learning technologies may enable the systems to adapt and evolve with little human intervention. A sequence labeling method called conditional random fields (CRFs) is considered the state-of-the-art ML algorithm for NER tasks and is leveraged by popular systems such as ABNER, BANNER where gold standard annotations of pre-defined features were used in training the model . More recently, deep learning methods have been explored to expand the ML models to consider non-linear features such as word embedding. Habibi et al. reported impressive performance of a generic model by combining long short-term memory (LSTM) network with traditional CRFs and leveraged word embeddings to help the deep neural network to capture semantic similarities among related terms (source code and trained models are available on https://github.com/glample/tagger ). In the context of alteration actionability assessment, simply recognizing the key entities is not sufficient and the ability to automatically extract biomarker-drug relationship is critical. Lee et al. 
developed an exceptional dictionary-based NER called Biomedical Entity Search Tool (BEST) that offers real-time retrieval of PubMed articles that mentioned the concepts in ten types of biomedical entities including mutations, genes, drugs, diseases, etc. Thanks to their creative design of the inverted indices which include entity-document pairs, BEST is able to deliver dynamic results to users with little response latency and also can accommodate conjunctive queries. In the next iteration of the system, Lee et al. leveraged the score produced by BEST as one of the features and applied deep learning technology coupled with word embedding features obtained from word2vec and achieved promising results in extracting mutation-gene-drug relationships from the PubMed literature . The baseline BEST tool is publicly available at http://best.korea.ac.kr/ and is updated on a regular basis (according to the creators). To facilitate a team of professional curators’ work in reviewing clinical evidence for precision oncology, Lever et al. have developed a text mining tool which prioritizes literature predicted to be highly relevant to their mission. Logistic regression classifier was trained by pre-labeled sentences with desired relationships . The data is made publicly available at http://bionlp.bcgsc.ca/civicmine/ . It is worth noting that the development team prioritized precision over recall in their parameter tuning process so while extracting some relationship (such as associated variant) yielded a recall of 0.794, the recall of extracting predictive evidence, a very important factor of alteration interpretation in precision oncology, had much room for improvement (0.141). In the commercial domain, the cognitive computing powerhouse IBM Watson developed a comprehensive literature mining pipeline called Watson for Genomics (WfG) that leveraged training data curated by subject matter experts to identify genomic-based actionability evidence . Compared with a traditional molecular tumor board, WfG was shown to identify new evidence not previously discovered and do so with much higher efficiency than the manual counterpart. Prioritization In the case when multiple actionable alterations are identified in a patient's molecular profile, how to prioritize is a practical concern for clinicians. The bioinformatics community has explored using supervised learning methods to identify driver mutations but cohort-level solutions often cannot sufficiently address the needs at single patient level. iCAGES leveraged curated drug association database to support a patient-level analysis on somatic mutations and copy-number data without the need for users to provide sophisticated configuration data as in tools such as OncoIMPACT and DriverNet . More recently, PANOPLY incorporated clinical features in addition to omics data and applied random forest analysis to identify prioritized treatment given a patient's clinical and molecular profile ( http://kalarikrlab.org/Software/Panoply.html ) and some anecdotal success was reported . Nulsen et al. developed a one-class support vector machine called sysSVM that was trained on TCGA data to create a pan-cancer detection tool for identifying driver genes at the granularity of single patients ( https://github.com/ciccalab/sysSVM2 ). Computational validation has shown promising results in terms of low false positive rate which is very essential for the clinical utilization but none of the applications has been formally evaluated in clinical contexts. 
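To make the one-class SVM idea behind patient-level prioritization more concrete, the following sketch trains sklearn's OneClassSVM on feature vectors of known driver genes and then scores a patient's remaining altered genes by their similarity to that class. This is a schematic illustration only: the features and numbers are random placeholders, and the sketch does not reproduce sysSVM or any other published tool.

import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.svm import OneClassSVM

# Hypothetical per-gene features in one patient's tumor (e.g., mutation burden,
# copy-number state, expression change); values are random placeholders.
rng = np.random.default_rng(1)
known_drivers = rng.normal(loc=1.0, size=(40, 5))    # training set: well-established cancer genes
candidates = rng.normal(loc=0.0, size=(200, 5))      # other altered genes in the same patient

scaler = StandardScaler().fit(known_drivers)
model = OneClassSVM(kernel="rbf", nu=0.1, gamma="scale").fit(scaler.transform(known_drivers))

# Higher decision-function scores mean a candidate gene looks more like the known drivers.
scores = model.decision_function(scaler.transform(candidates))
top_ten = np.argsort(scores)[::-1][:10]              # indices of the ten highest-ranked candidates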
Overall, extensive clinical validation is needed before such informatics systems can be integrated into the clinical care workflow.
After identifying mutations in the patient molecular profile that are actionable (if any), the next operational task for treatment planning is to find matching therapeutics that are clinically available, i.e, approved by Food and Drug Administration (FDA), recommended by professional guidelines or under clinical development via clinical trials. In terms of data access, FDA provides an open-access database that catalogs all pharmaceutical agents which gained the approval from this agency . National Comprehensive Cancer Network (NCCN), a national leader in professional guidelines for oncologists, offers subscribers access to manually curated structured data about drugs that are referenced in their clinical guidelines, along with the corresponding biomarkers (NCCN Drugs and Biologics Compendium and Biomarker Compendium) . Therefore, there is little ambiguity in data extraction from these high-level data sources and most commercial vendors include this information in their molecular sequencing reports. However, while seventy or so molecular-matched therapies have gained FDA approval, many more are still in clinical development phase and thus are only clinically available in experimental settings as part of an ongoing clinical trial. As of the time of writing, there are over 21 000 open oncology clinical trials that are listed on ClinicalTrials.gov, the world's largest clinical trial databank . As comprehensive as ClinicalTrials.gov's coverage is, much pertinent information such as diseases, drugs, eligibility criteria is still largely captured in unstructured format . Especially for precision oncology trials which aim to assess genomically matched therapies, there are many unique characteristics and nuances in the eligibility criteria that need to be accommodated. Dedicated frameworks and workflows have been proposed in the literature to help facilitate the curation of clinical trial knowledgebase which in turn is used for patient matching but most of the curation tasks remain manual thus scalability is a practical concern . Being able to comprehensively characterize a patient's clinical and molecular profile for accurate trial matching is also often a manual process that is laborious. Artificial intelligence techniques have been actively explored in clinical trial matching and patient pre-screening. Generally speaking, these systems apply a combination of NLP and ML techniques to extract pertinent demographic, clinical and molecular information from patient medical records, clinical trial documents, or a combination of both. Tools like Antidote, Synergy are patient-centric while Deep6.ai, Mendel.ai and Watson for Clinical Trial Matching (WCTM) are more clinician-oriented . Formal evaluation of these commercial systems is rare but there are some exceptions. Mendel.ai published a retrospective study using their AI-empowered tool to facilitate clinical trial matching and reported 24%–50% increase over standard practices in terms of number of patients that were correctly identified as eligible in two out of three trials but failed to identify eligible patients for a trial that was closed due to lack of accrual . Their system combined ML and NLP techniques in tasks such as text recognition and clinical language understanding. A central knowledge base stores all knowledge extracted from real world data regarding patients’ profiles. 
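Once eligibility criteria have been converted into a structured form, the final matching step can be as simple as set operations over a patient's profile. The toy example below is purely illustrative: the trial identifiers, conditions and biomarkers are invented, and the real systems discussed here rely on NLP and ML to produce such structured representations from free text in the first place.

# Toy illustration of matching a patient's actionable alterations against
# structured trial biomarker criteria; trial IDs and criteria are invented.
patient = {
    "diagnosis": "non-small cell lung cancer",
    "alterations": {"EGFR L858R", "TP53 R273H"},
}

trials = [
    {"id": "TRIAL-A", "conditions": {"non-small cell lung cancer"},
     "required_biomarkers": {"EGFR L858R", "EGFR exon 19 deletion"},
     "excluded_biomarkers": set()},
    {"id": "TRIAL-B", "conditions": {"colorectal cancer"},
     "required_biomarkers": {"KRAS G12C"}, "excluded_biomarkers": set()},
]

def matching_trials(patient, trials):
    hits = []
    for t in trials:
        disease_ok = patient["diagnosis"] in t["conditions"]
        biomarker_ok = bool(t["required_biomarkers"] & patient["alterations"])
        not_excluded = not (t["excluded_biomarkers"] & patient["alterations"])
        if disease_ok and biomarker_ok and not_excluded:
            hits.append(t["id"])
    return hits

print(matching_trials(patient, trials))  # ['TRIAL-A']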
WCTM published several studies where cognitive computing was used in real clinical settings, where WCTM was found to outperform manual counterparts in accuracy and yield significant increase in efficiency [ , , ]. WCTM used NLP to process unstructured data from patient's electronic medical records (whenever available) and further leveraged its ML component to populate patient model with pertinent disease related attributes. A similar process was utilized in intaking the clinical trial information using lessons learnt from several rounds of trial ingestion in the pilot phase. In academic settings, utilizing NLP in conjunction with ML to automate certain processes that facilitate clinical trial matching is also extensively studied. On one hand, some efforts are related to automatically convert unstructured text describing clinical trial eligibility criteria into structured conditions that are query-able . Notably, Liu et al. have developed a tool called DQueST which creates dynamic questionnaire for clinical trial searches. They developed an integrated NLP pipeline including negation detection such as NegEx and entity recognition module that applied CRFs as the ML model in their NLP module to identify clinical entity, attribute recognitions and domains leveraging annotated corpus . Miotto and Weng developed a novel solution which used a case-based reasoning method to establish a ‘target patient profile' using electronic records data from patients’ who have already enrolled in a given clinical trial and to compare new patients’ profiles with the said ‘target profile' in order to computationally predict their eligibility. While the real-life application of this approach remains to be seen, the creative use of vector-based ML method bypasses the need to literally process clinical trial documents and may inspire other related applications. With the rapid development in computer hardware and software, utilizing artificial intelligence to provide near real-time assistance to clinicians in their decision making has become a reachable goal from a purely technical sense. This review discussed some representative systems that have either been explored in some clinical settings or exhibited high translational potential. While the progress is encouraging, many challenges remain that limit a wider utilization of AI systems in clinical decision making. Here we outline a few examples of such challenges and some potential mitigation tactics. Data availability The performance of a data-driven model is directly correlated with the quality and volume of the data it analyzes. While technologies help generate a myriad of data, data do not equate knowledge. Supervised learning methods, which constitute the majority of the AI methods explored today, specifically require pre-labeled data for training their models. While publicly-accessible datasets exist, most of the granular clinical decision making processes warrant more domain-specific gold standards. Manually curating these gold standards for specific operational needs is not a trivial process. Efforts for making more of these curated datasets publicly available will help the community to grow. Validation While all studies mentioned in this review were evaluated, usually via some retrospective studies, more extensive validation is often needed before they can be embraced by the broad clinical community. 
Randomized trials provide more systematic evaluation and ultimate proof of clinical utility. Real-world validation that applies the AI systems in the actual clinical care process is also important to assess the real-world benefits and limitations of such systems. Stakeholder acceptance To broaden the impact of AI in medicine, it is critical that the development of these methods is done with the human experts, who are the stakeholders, in the loop and with the intention of augmenting instead of replacing human intelligence. Some basic literacy of ML methods is critical for the clinicians and decision makers to arrive at a critical yet reasonable assessment of the technologies involved. It is important to acknowledge that there is no universal solution and the choice and design of technologies are usually context-dependent. Generalizability, explainability, resource overhead and cost-reduction potential are some examples of important factors for stakeholders to consider in addition to the raw performance reported on some validation experiments. Interested readers can review the work of Faes et al. , which outlined a clinician's guide to critically assessing ML studies, for more details. Regulatory development Due to the ethical and safety impact of all clinical decision making processes, it is important for any technology maker to consider the regulatory component early in the development phase. While the related regulatory processes remain largely elusive, some recent clinical trial guidelines for protocols involving AI systems such as SPIRIT-AI are a positive step in the right direction .
Next-generation sequencing technologies have transformed our understanding of cancer and are actively being utilized in many disease settings to guide treatment planning for the purpose of optimizing outcomes. The past decade has seen exponential growth in the development of artificial intelligence (AI) and machine learning (ML) applications, which have the potential to augment human intelligence in a scalable fashion. This review discussed some state-of-the-art systems that employed ML technologies to facilitate processes involved in clinicians’ treatment planning using molecular data. Some challenges of AI applications in this domain are also discussed.
Neuronavigation-guided Judet screw technique for C2 pedicle fractures: how I do it
b369deea-703a-4c5e-aa7d-e5aa88aeb4e6
11903534
Musculoskeletal System[mh]
Traumatic spondylolisthesis of the axis, commonly referred to as a Hangman’s fracture, represents one of the most frequent fractures of the C2 vertebra, second only to fractures of the odontoid process. This injury involves a bilateral fracture of the pars interarticularis of C2, leading to instability between the vertebral body and the posterior elements of the axis. Bilateral pedicle fractures of C2 are classified as "atypical" fractures, with only a limited number of cases reported in the literature. For unstable fractures, particularly in cases involving significant vertebral displacement or risk of spinal cord injury, surgical spine fixation is often necessary. Among surgical options, posterior transpedicular screw fixation offers direct stabilization of the C2 vertebra. Originally described by Leconte and later refined by Judet, the “Judet technique” involves placement of cancellous lag screws through the C2 pedicles. [ , , ] This technique is highly effective in selected cases, with minimal soft tissue damage, offering a minimally invasive approach, which preserves motion and minimizes complications. . We present our "How I do it" approach, utilizing neuronavigation to enhance the precision of C2 pedicle screw placement and reduce the risk of neurological and vascular complications. This advanced technology allows for more accurate identification of anatomical landmarks, ensuring safer and effective fixation in bilateral (Fig. ) and monolateral (Figs. and ) C2 pedicle fractures, as Hangman’s fractures and also in complex C2 fractures (combination with odontoid process fracture). In C2 pedicle screw fixation, exposing the screw entry points typically follows the identification of the C2 lamina and articular process. The entry point is located 2 mm lateral to the pars midpoint along the cranial edge of C2 lamina. After identifying this point, surgeons try to guarantee trajectory accuracy by visualizing the medial and superior borders of the pedicle. One of the most critical anatomical considerations is the relationship between the vertebral artery (VA) and the C2 pedicle. The VA passes through the transverse foramen and along the vertebral artery groove, lateral to the pedicle. Anatomical variations, particularly high-riding vertebral arteries, significantly increase the risk of injury during screw placement, with reports of injury rates around 8.2%. Thorough knowledge of these variations is crucial for avoiding complications. Preoperative imaging evaluation provides safer screw placement, especially in complex cases. Neuronavigation is highly valuable in this context as it enhances the surgeon’s ability to precisely identify both the anatomical landmarks and the entry point and to guide screw trajectory. The patient is positioned prone, with a radiolucent 3-pin head-holder, in the "military tuck" position (Fig. a). Anti-Trendelenburg positioning is mandatory to facilitate surgical exposure and reduce venous bleeding. Initially, 2D fluoroscopy is performed to verify the alignment and positioning of the bony fragments (e.g., also in cases of concurrent odontoid process fracture). The surgical field is prepped and draped. A small, midline, skin incision is made at the level of C2 spinous process (mini-invasive/mini-open). The cervical fascia is incised and the spinous process, laminae, and articular processes of C2 are skeletonized using monopolar (Fig. b) Once bony exposure is complete, the navigation reference frame is securely attached to the C2 spinous process (Fig. c). 
The LoopX system (Brainlab, Munich, Germany) or the BodyTom Computed Tomography (CT) scanner (Neurologica, Samsung, US) are used to obtain an initial CT scan of the cervical spine, enabling automatic registration with the Brainlab navigation systems (Fig. d). Validation of all trackable instruments is mandatory before surgery begins. Navigation accuracy is verified using clear anatomical landmarks. The virtual screw trajectory is planned on the navigation station using the CT scan acquired earlier, with virtual 3D screws of the desired dimensions (length and diameter) (Fig. e, f). Trajectory is confirmed using the navigation pointer, as well as adequacy of bony exposure. A navigated tubular guide is placed and held manually by a surgical assistant. Through this working channel, the drill is used to prepare the screw trajectory inside the fractured C2 pedicle and K-wires are then inserted. (Fig. a, b). Drilling must pass through the pars and into the fractured pedicle stumps. A navigated screw driver is used to insert the screw over the guidewire, with obvious tactile feedback indicating the crossing of the fracture line (Fig. d, e). Finally, a 3D-reformatted CT scan with the LoopX or BodyTom system is performed to confirm correct positioning of the screws and reduction of the fracture line (Fig. c, f). The navigation reference is removed and the cervical fascia and skin incisions are closed with running sutures and metal clips, respectively. Minimally invasive/mini-open neuronavigation-guided posterior C2 transpedicular “Judet screw fixation” is a safe and effective technique used to manage C2 instability. This approach reduces bleeding and muscular damage while providing lower risks of vertebral artery injury and increased accuracy compared to traditional C1-C2 fixation. Additionally, by avoiding C1-C2 fixation it also preserves rotational head motion, offering a significant functional advantage. Yet, by minimizing the extent of cervical spine exposure, the proposed technique also helps to decrease postoperative pain and to accelerate patient recovery. The main limitation of this technique is related to the need for spinal navigation technology, which, although increasingly available in neurosurgical operating rooms, is not yet universally spread or accessible. Spinal neuronavigation is also useful to assist less experienced surgeons in accurately placing screws by enabling meticulous preoperative planning and guiding them through anatomical trajectories. Compared to percutaneous, navigated, tubular approaches with the reference frame attached to the head holder, this technique requires a slightly longer skin incision (mini-open) with moderately greater muscle exposure. However, it offers the advantage of direct anatomical control of both entry points and screw trajectory through the bony stumps of the fractured C2 pedicle, providing a higher degree of surgical precision. Reliability of neuronavigation should be continuously verified throughout the surgical procedure, step by step, in order to confirm technical reliability and real correspondence between intraoperative CT-guided navigation images and exposed bony anatomical structures. 3D-reformatted CT scan allows accurate monitoring of screw placement. A thorough knowledge of vascular anatomy, as well as of traditional surgical techniques, entry points and trajectories for C2 pedicle screw fixation remains essential. 
Patients should be counseled about potential risks, including vertebral artery injury with neurological deterioration, infection, hematoma or misplaced screw insertion. Additionally, they should be informed about the possibility of pseudoarthrosis. C2 pedicle fractures can occur as atypical Hangman’s fractures or as part of complex fractures, also involving the odontoid process. The Judet technique involves placing cancellous lag screws through the C2 pedicles, offering direct C2 pedicle reconstruction and fixation, avoiding C1-C2 fixation with consequent loss of neck motion. Minimally invasive/mini open neuronavigation-guided posterior C2 pedicle screw fixation is a safe and effective technique for managing atlanto-axial instability. Anatomical variations of the vertebral artery must be ruled out preoperatively (CT-angiography) to avoid vascular complications. The Judet technique preserves rotational head movement, a key functional advantage over C1-C2 fixation. It enables faster recovery and shorter hospital stays, as it minimizes postoperative discomfort and rehabilitation time compared to C1-C2 fixation. Expertise in traditional C2 pedicle screw placement remains essential. Verification of navigation accuracy, using known bony landmarks and intraoperative 3D-reformatted CT imaging, is crucial to provide safety and screw placement accuracy. Patients should be informed about surgical risks, including artery injury, infection, and pseudoarthrosis. The main limitation seems to be related to availability of spinal navigation technology, which is not yet universally accessible. Below is the link to the electronic supplementary material. ESM 1 (MP4 172 MB)
Insights into artificial intelligence in myopia management: from a data perspective
a5fd25c1-7af6-47ad-8076-25fc471c0a3d
10212230
Ophthalmology[mh]
Myopia, which is defined as a spherical equivalent (SE) ≤  − 0.5 diopters (D), is a substantial global health issue. The global prevalence of myopia is estimated to be 49.8% (4.758 billion) of the world’s population, of which 9.8% (938 million) will suffer from high myopia by 2050 . In some Asian countries, more than 80% of high school students are myopic , and a greater proportion of young individuals develop high myopia (spherical equivalent (SE) ≤  − 6.0 diopters) , which further results in a higher risk of developing visually impairing and blinding complications . Although the causes of this pandemic remain unknown, strategies for coping with the myopia pandemic such as early identification, regular follow-up, and timely intervention of high-risk groups of myopia, are essential and are gaining more social attention . In recent years, artificial intelligence has been advancing at an unprecedented rate, showing great potential for the automated analysis of medical information and images. In the field of ophthalmology, due to the wide application of various imaging technologies in eye diseases, many studies have applied AI methods to different ophthalmology diseases, such as diabetic retinopathy (DR) , age-related macular degeneration (AMD) , cataract , dry eye syndrome (DES) , and glaucoma . For myopia, research efforts are still relatively insufficient compared to other subspecialties, even though AI has shown the potential to address urgent needs in the field of myopia. The clinical tasks of managing myopia include early screening, risk stratification, progression prediction, timely and individualized intervention, and ongoing management . Relevant data modalities produced during the process can be classified into two categories: clinical data and imaging data. As a concept in AI, machine learning (ML) is deeply entwined with statistics and is powerful for working with numerical or categorical data . Commonly used ML techniques in myopia include support vector machine (SVM), linear regression, random forest (RF), naive Bayes, k-nearest neighbor (KNN), and extreme gradient boosting (XGBoost) . As a subset of ML, deep learning (DL) has performed well in many image-based applications, such as object recognition and semantic segmentation . Convolutional neural networks (CNNs) are the foundation of image-driven applications in myopia and the use of recurrent neural networks (RNNs) is still at an early stage. The abundant datasets with adjunctive AI analysis have led to improvements in myopia management. As an emerging research field, there are currently only few reviews summarizing the application scenarios of AI in myopia , and none has focused on the data modalities available and the AI methods appropriate for each type of data, which is insufficient as data continue to grow in variety and quantity. Therefore, in this review, we examine how AI methods have been applied in analyzing different data modalities generated from clinical practice in myopia. We conducted a comprehensive literature review using two databases (i.e., PubMed and IEEE Xplore) in August 2022 and March 2023. Our search terms included a combination of relevant keywords, such as “myopia” and “artificial intelligence” and Boolean operators to ensure a comprehensive search. We also reviewed the reference lists of relevant articles to identify additional studies that may have been missed in our initial search. 
Our review is focused on the use of AI in the risk identification, screening, detection, classification, and treatment of myopia. Accordingly, we considered research articles that utilized AI for these purposes to be appropriate for inclusion in our review and have incorporated relevant studies in this article. When facing a patient with myopia in the clinic, clinician considerations usually follow the sequence of risk factor identification, the examination process, selection of treatment strategies, and ongoing management , as shown in Fig. . Myopia has traditionally been viewed as a consequence of the sophisticated interaction of lifestyle, genetic, and environmental factors. Therefore, detailed history taking is routinely conducted at the very beginning, and risk factors for a given individual are identified. Then, simple clinical tests such as cycloplegic and/or noncycloplegic refraction, best-corrected visual acuity, binocular vision and accommodative tests, anterior eye health evaluation, and corneal topography are performed at all visits. Measurement of axial length is optional, and currently, there is no established standard for normal or accelerated axial elongation. For patients who need further examination, especially those with a high degree of refractive error, fundus imaging and examination are performed if indicated. After all the examinations, treatment strategies should be selected in an individualized way for patients at risk of low vision and blindness. For those possessing multiple risk factors, it will be helpful to predict the prognosis and carry out regular follow-up. During the whole process, a large amount of meaningful data is generated. Since the key components of developing an AI application can be summarized as "MDT," that is, "Model, Data and Target", the abundant datasets make it possible for AI to assist in many tasks of myopia management based on each type of data modality. At nearly all steps of the clinical practice mentioned above, a considerable amount of clinical data is generated. These data can include basic ophthalmologic information, such as cycloplegic and noncycloplegic refraction, axial length (AL), corneal curvature radius (CR), best-corrected visual acuity (BCVA), and intraocular pressure (IOP); behavioral and environmental data, such as eye habits, reading distance, illumination conditions, and outdoor activity; and personal information related to diseases, such as demographics, heredity, and psychological state. All of the basic ophthalmologic information is numerical data, a data type expressed in quantitative numbers. Most of the remaining items are categorical data, information divided into groups that may be coded numerically even though the numbers themselves carry no quantitative meaning. In brief, all of these clinical data can be expressed in numeric form, which is different from the imaging data generated by fundus examination. Considering the size of these data, especially for a complex disease such as myopia in which numerous codependent factors are involved in the causes, epidemiology, diagnostics, and progression, it is almost impossible to analyze the clinical data manually. Therefore, ML methods, with the capability of handling large amounts of data in a nonlinear way and extracting large numbers of potential predictive parameters, even when they outnumber observations , are suitable for applications in myopia (Table ). 
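As a minimal, hypothetical sketch of how such mixed numerical and categorical clinical data can be converted into the purely numeric matrix that ML methods expect, the following Python example uses scikit-learn preprocessing utilities; the field names and values are illustrative placeholders rather than variables from any cited study.

```python
# Minimal sketch: encoding mixed clinical data (numerical + categorical) for ML models.
# Field names and values are hypothetical placeholders.
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import OneHotEncoder, StandardScaler

records = pd.DataFrame({
    "spherical_equivalent":  [-0.75, -2.50, -4.25],          # diopters
    "axial_length_mm":       [23.4, 24.6, 25.8],
    "outdoor_hours_per_day": [2.0, 0.5, 1.0],
    "parental_myopia":       ["none", "one_parent", "both_parents"],  # categorical
})

preprocess = ColumnTransformer([
    ("numeric", StandardScaler(),
     ["spherical_equivalent", "axial_length_mm", "outdoor_hours_per_day"]),
    ("categorical", OneHotEncoder(handle_unknown="ignore"), ["parental_myopia"]),
])

X = preprocess.fit_transform(records)   # numeric matrix usable by SVM, RF, XGBoost, etc.
print(X.shape)
```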
Applications based on this type of data mainly include prognosis prediction, refractive surgery assistance, and remote monitoring. Prognosis prediction By constructing risk models with various variables in this data modality, many studies have determined the capability of ML methods for prognosis prediction. A random forest model has been shown to predict the onset of high myopia at 18 years of age as early as 8 years in advance at a clinically acceptable accuracy by using long-term refraction data . Comprehensive assessment of the physiological elongation of axial length, a key indicator of high myopia, by SVM and GBRT can be used to predict myopia progression without mydriatic optometry . Additionally, the probability produced by these models can help persuade patients to accept further referral . Refractive surgery ML methods have been applied to eye parameters obtained from advanced instruments to screen candidates for refractive surgery , detect corneal ectasia susceptibility , and distinguish healthy corneas from diseased ones . One of the earliest studies on AI applications in myopia focused on using ML methods and data collected by the GALILEI Dual Scheimpflug Analyzer to automate the detection of subclinical keratoconus, which is a contraindication for refractive surgery . Additionally, different ML models trained on medical records of myopia patients have been reported to improve the accuracy of intraocular lens (IOL) power selection, which is crucial for reducing postoperative refraction errors, especially in highly myopic eyes that have undergone cataract surgery. The integration of AI into IOL power calculation formulas, such as Hill-RBF3.0 and Kane, has been shown to produce more accurate prediction results compared to traditional formulas, including Barrett Universal II, Haigis, and SRK/T . Monitoring Environmental risk factors, such as working at a close range (< 20 cm) and excessive continuous close working time (> 30 min), are considered relevant to the development of myopia, and increasing effective outdoor exposure time is an independent protective factor against myopia. However, it is difficult to monitor these factors widely in the general public. Various smart wearable devices have been developed for monitoring working distance or outdoor exposure time, such as RangeLife , FitSight , and Cloud clips . Using the data collected by wearable devices, an SVM model has been trained to distinguish indoor and outdoor locations . Such devices may be further combined with Internet apps, encouraging children to spend more time outdoors . 
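As a minimal, hypothetical sketch of this kind of tabular prognosis modeling, the following example trains a random forest on simulated longitudinal refraction features to flag later high myopia; the features, labels, and synthetic data are illustrative assumptions and do not reproduce any of the cited models.

```python
# Minimal sketch: random-forest prediction of later high myopia from longitudinal
# refraction data. All features and labels are synthetic and for illustration only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 500
se_age6 = rng.normal(-0.5, 1.0, n)              # spherical equivalent (D) at age 6
progression = rng.normal(-0.4, 0.2, n)          # assumed annual progression (D/year)
se_age8 = se_age6 + 2 * progression
se_age10 = se_age8 + 2 * progression
X = np.column_stack([se_age6, se_age8, se_age10, progression])
# Hypothetical label: high myopia (SE <= -6 D) by age 18 under linear extrapolation.
y = (se_age10 + 8 * progression <= -6.0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
risk = model.predict_proba(X_te)[:, 1]          # per-child risk estimate for referral decisions
print("AUC:", round(roc_auc_score(y_te, risk), 3))
```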
Fundus examination, which is recommended annually in high myopes, provides a visualization of both the central and peripheral retina under dilation and generates a considerable amount of imaging data. Among the different imaging methods, fundus photography (FP) and optical coherence tomography (OCT) are most commonly used for the assessment of myopia-related fundus changes. On fundus examination, optic disc tilt and arc-shaped spots can be found in simple high myopia. In the fundus of pathologic myopia, a severe form of high myopia, posterior staphyloma, myopic traction maculopathy (MTM), myopic choroidal neovascularization (mCNV), dome-shaped macula (DSM), and high myopia-related optic neuropathy can be seen with high specificity. These pathologic changes usually lead to irreversible damage to the retina, choroid, and other tissues, which seriously affects the visual function of patients but may present insidiously. Thus, timely imaging as well as accurate interpretation with the help of AI is important in detecting early complications and monitoring progression . Fundus photography-based applications FP is routinely ordered in a wide variety of ophthalmic conditions . It documents the retina, macula, optic nerve, and main retinal blood vessels by using a highly specialized camera with high-powered lenses designed to visualize the pattern of the back of the eye . It is often referred to as retinal fundus photography (RFP), highlighting the fact that an ophthalmologist’s primary goal is typically to assess the appearance of the retina. Based on this data modality, several studies have reported the use of machine learning methods for myopia and associated complications, as shown in Table . Unlike prognosis prediction, which is based on clinical data, the prediction task based on FP images mainly aims at predicting refractive error with ResNet , a well-known CNN model used for feature extraction, which is surprising given that this was not a task thought to be possible manually. 
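A minimal sketch of such an image-to-refraction regression setup is given below, using a torchvision ResNet whose classification head is replaced by a single continuous output; the tensors and training loop are placeholders for illustration and do not reproduce the architectures or data of the cited studies.

```python
# Minimal sketch: CNN regression of spherical equivalent (SE) from fundus photographs.
# The images and labels below are random placeholders; a real pipeline would load
# labeled fundus photos instead.
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, 1)      # single output: predicted SE in diopters

criterion = nn.L1Loss()                            # directly optimizes mean absolute error
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

def train_step(images, se_labels):
    """One gradient step on a batch of fundus images and their refraction labels."""
    model.train()
    optimizer.zero_grad()
    predicted_se = model(images).squeeze(1)        # shape: (batch,)
    loss = criterion(predicted_se, se_labels)
    loss.backward()
    optimizer.step()
    return loss.item()

dummy_images = torch.randn(4, 3, 224, 224)         # stand-in for a fundus-image batch
dummy_labels = torch.tensor([-1.25, -3.50, -0.50, -6.00])
print(train_step(dummy_images, dummy_labels))
```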
Such image-based prediction of refractive error might be useful for studying possible morphological changes in myopic eyes and can also help in epidemiologic research of myopia from large fundus image datasets where refraction labels are unavailable. In addition, fundus images can be investigated by fully convolutional networks (FCNs), a model modified from CNNs, and the semantic segmentation, or pixelwise classification, of these myopia-related fundus changes is possible . For different types of myopic maculopathy, CNN-based models have been exploited to perform the classification task according to the META-PM classification system . In addition to private datasets, the utilization of a public dataset for training UNet++ to detect pathologic myopia and highlight the areas of lesions is also possible . Traditional fundus cameras only capture images at an angle of 30 to 60° . Therefore, combined with UWF imaging techniques, a novel form of FP, namely UWF fundus images (UWF-FP), enables ophthalmologists to observe the peripheral retina without pupillary dilation , with up to a 200° view of the ocular fundus in a single exposure. The employment of artificial intelligence in this data modality has achieved promising results, such as detecting glaucomatous optic neuropathy , identifying lattice degeneration , and even screening anemia . For myopia, UWF-FP enables ophthalmologists to screen for notable peripheral retinal lesions (NPRLs), the clinically significant peripheral retinal lesions that are seen more frequently in myopic eyes than in normal eyes . If left untreated, NPRLs are likely to progress to rhegmatogenous retinal detachment (RRD), an important cause of visual loss . Based on the peripheral retinal information provided by UWF-FP, a customized CNN has achieved satisfying accuracy in automatically identifying NPRLs. Similar to FP, UWF-FP can also be used by CNN-based models to predict refractive error, with a mean absolute error of the predicted spherical equivalent (SE) of 1.1150 D . Surprisingly, this accuracy is inferior to the FP-based result of Varadarajan et al. . This might result from differences in the quality and quantity of the training and validation datasets. Further comparison of the performance of AI applications between FP and UWF-FP is needed. Optical coherence tomography-based applications To analyze the fundus changes associated with myopia, OCT is another widely used method. It is carried out to detect myopia-related vision-threatening conditions, such as retinal detachment, pathological mCNV, macular hole, and retinoschisis . The characteristics of OCT enable ophthalmologists to see a myriad of pathologies in the anterior and posterior segments of myopic eyes, including the cornea, sclera, anterior chamber, vitreous, choroid, retina, and optic nerve, structures that could previously be examined only in enucleated eyes . Based on OCT, deep learning has been extensively studied for the detection of AMD and glaucoma . Regarding myopia and associated complications, 12 studies reported the use of CNN-based models (Table ). These models can help ophthalmologists identify myopic maculopathy in patients with high myopia and the presence of pathologic myopia . Four vision-threatening conditions associated with myopia can also be automatically detected with InceptionResNetV2 . 
Since OCT images contain layer information, a characteristic unique to this modality, studies have demonstrated the potential for segmenting and analyzing the choroidal sublayers by using U-Net and mask R-CNN , and further utilization of this in myopia is expected. OCT-based models can also be helpful in automatic screening for high myopia and estimating uncorrected refractive error . Apart from applications that produce actual outputs, there are other ways to use AI methods in myopia. The ATN classification and grading system is a widely applicable clinical diagnostic criterion for myopic maculopathy . While atrophy (A) can be judged based on FP only, determining the categories of traction (T) and neovascularization (N) requires FP together with OCT images. However, OCT examination is much more difficult to adopt than FP. Therefore, a study built a multibranch ResNet with FP and OCT images to achieve ATN grades based on FP only, and the performance was superior to that of ophthalmologists who are not retinal specialists . 
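The multibranch idea can be sketched as two CNN encoders whose pooled features are concatenated before a shared classification head; the layout below is a simplified, assumed illustration rather than the published architecture, and the number of grades is an arbitrary placeholder.

```python
# Minimal sketch: a two-branch network fusing fundus-photo and OCT features for a
# multi-class grading task. Simplified, illustrative layout only.
import torch
import torch.nn as nn
from torchvision import models

class TwoBranchGrader(nn.Module):
    def __init__(self, num_classes: int):
        super().__init__()
        self.fp_branch = models.resnet18(weights=None)    # fundus-photo encoder
        self.oct_branch = models.resnet18(weights=None)   # OCT encoder
        feat_dim = self.fp_branch.fc.in_features          # 512 for ResNet-18
        self.fp_branch.fc = nn.Identity()                 # keep pooled features
        self.oct_branch.fc = nn.Identity()
        self.head = nn.Linear(2 * feat_dim, num_classes)  # classifier on fused features

    def forward(self, fp_image, oct_image):
        fused = torch.cat([self.fp_branch(fp_image), self.oct_branch(oct_image)], dim=1)
        return self.head(fused)

model = TwoBranchGrader(num_classes=4)                    # hypothetical number of grades
logits = model(torch.randn(2, 3, 224, 224), torch.randn(2, 3, 224, 224))
print(logits.shape)                                       # torch.Size([2, 4])
```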
Despite the reported effective implementation of AI in the clinical practice of myopia, problems and roadblocks remain. Prior to the general adoption of AI, critical technical and clinical restrictions must be overcome. Establishment of a solid data foundation The quality and quantity of data are extremely important to the applications of AI. The majority of the aforementioned AI applications in myopia use datasets collected by ophthalmologists during their clinical practice, which usually cover only one or a few population groups. This might result in poor generalizability and makes it difficult to determine whether poor performance is attributable to spectrum bias . 
The disparity of imaging systems, discrepancy in imaging and postprocessing protocols, and lack of computing power also hinder the implementation of these algorithms into clinical practice. People who are in low-resource environments are frequently undercounted because it can be challenging to obtain medical attention and thus capture their data . In this sense, public ophthalmological datasets are essential and provide an equal platform for comparing the outcomes of AI models in ophthalmology. There are some popular public datasets established by ophthalmologists from multiple centers for other ophthalmopathies, but none of them focus on myopia alone . AI research in myopia might consider the feasibility of utilizing these public datasets in the future. The establishment of a large-scale public myopia dataset is also possible with novel AI technologies. A generative adversarial network (GAN) can be used in the generation of a large number of random and diverse images, and You et al. determined its application on FP and OCT images in ophthalmology, offering a new way to enlarge datasets. Federated learning and swarm learning have emerged as potential methods to cope with privacy problems, providing a decentralized and secure method of data management. Handle multitasks with multimodal data The complexity of clinical manifestations in diseases and the diversity of data obtained through different examination modalities present a challenge in AI applications. In myopia, current AI models are typically designed for specific data modalities and purposes, resulting in high accuracy in distinguishing between “disease-free” and “diseased” cases, but poor performance in more complex tasks such as distinguishing between multiple diseases . This challenge arises from the fact that some pathological changes in myopia can also occur in other ophthalmic conditions. One approach to address this is to exclude patients with comorbidities or group together a range of diseases. Alternatively, multimodal medical data fusion techniques can be employed by extracting relevant features from images and processing them with AI algorithms . These features are not limited to geometric measurements but may also include characteristic lesion regions . Additionally, image data can be processed to obtain a virtual score, which can be predicted together with clinical data by AI algorithms . While new developments in this area continue to emerge, it is important to note that multimodal data do not consistently outperform unimodal data, as demonstrated in a study on diabetic retinopathy staging . Explore the potential of novel data modalities Novel forms of OCT images, such as UWF optical coherence tomography (UWF OCT) and OCT angiography (OCTA), can be seen in the clinical practice of myopia. UWF OCT, instead of the traditionally used 3D-MRI, can be helpful for better visualization of the posterior staphyloma in myopic eyes , which is “an outpouching of the wall of the eye with a radius of curvature less than the radius of curvature of the surrounding eye wall” and usually results in poorer vision and more anatomical anomalies . OCTA is very helpful for detecting retinal microvasculature in a noninvasive and depth-resolved way , thus providing a way of detecting mCNV with high sensitivity and specificity. 
Based on UWF OCT and OCTA, AI research is still restricted to improving the quality of images, such as image reconstruction and denoising , and there is currently a dearth of research exploiting the possibility of developing DL models to detect posterior staphyloma or mCNV. 
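Before turning to the outlook, the federated learning option raised above as a privacy-preserving way to pool data across centers can be reduced to a simple weight-averaging step; the sketch below shows this federated-averaging idea on a toy model and is a schematic illustration under assumed site sizes, not a complete training protocol.

```python
# Minimal sketch: federated averaging (FedAvg) of model weights trained at separate sites,
# so raw patient data never leave the contributing centers. Illustrative only.
import copy
import torch.nn as nn

def federated_average(site_models, site_sizes):
    """Weighted average of per-site parameters, proportional to local dataset size."""
    total = float(sum(site_sizes))
    avg_state = copy.deepcopy(site_models[0].state_dict())
    for key in avg_state:
        avg_state[key] = sum(
            m.state_dict()[key].float() * (n / total)
            for m, n in zip(site_models, site_sizes)
        )
    merged = copy.deepcopy(site_models[0])
    merged.load_state_dict(avg_state)
    return merged

# Three hypothetical centers sharing the same small model architecture.
site_models = [nn.Linear(10, 2) for _ in range(3)]
global_model = federated_average(site_models, site_sizes=[120, 300, 80])
print(global_model.weight.shape)
```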
The advent of AI is expected to transform the management of myopia. The findings of this review suggest that AI has been applied to most parts of the clinical practice of myopia and is built mainly on three types of data: clinical data, FP (including UWF-FP), and OCT. Image-driven AI applications account for the majority. However, compared with other ophthalmic diseases, AI research in myopia is still in its early stages, and these results are far from clinically viable. It is necessary to establish large, high-quality public datasets and to improve the capability of handling multimodal input. Exploring novel data modalities, designing advanced algorithms, and finding additional application scenarios could also be of great significance.
Identification of
5dfb559c-4531-4aac-971f-8538c0320bd4
11878014
Biochemistry[mh]
Members of the genus Elizabethkingia are known to be Gram-negative, aerobic, non-fermenting, nonmotile, and rod-shaped bacteria . A bacterium of the genus was first found in 1959 by Elizabeth O. King and was described as “ Flavobacterium meningosepticum ” . Now, the relevant taxa were transferred to Elizabethkingia , in which seven species are validly published as E. anophelis , E. argenteiflava , E. bruuniana , E. meningoseptica , E. miricola , E. ursingii , and E. occulta . E. anophelis was first isolated from the midgut of Anopheles gambiae mosquitos in Africa in 2011 ; E. argenteiflava from soybean in South Korea in 2015 ; E. bruuniana from human blood in the United Kingdom in 1975 ; E. meningoseptica from cerebrospinal fluid, blood, and throat of infants in the United States in 1959 ; E. miricola from condensation water on the Mir space station of Russia in 1997 ; E. ursingii from soil in Denmark in 1964 ; and E. occulta from human sputum in Australia in 1977 . Clinical strains of the known Elizabethkingia species, which are isolated from respiratory, blood, and urine samples often cause severe infections in neonatal or immunocompromised patients, with high mortality rates of 25–52% and have caused hospital outbreaks worldwide. E. anophelis infections have caused outbreaks in 10 countries , including Argentina, Australia, China, France, Hong Kong, India, Singapore, South Korea, Taiwan, and the United States. E. meningoseptica infections have caused outbreaks in children in 10 countries , including Brazil, Denmark, India, Israel, Mauritius, Norway, Taiwan, Turkey, the United Kingdom, and the United States. E. miricola infections have caused an outbreak in Spain . Other known Elizabethkingia species, except for E. argenteiflava , have also been isolated from human specimens . The majority of known clinical strains of Elizabethkingia species are resistant to all β-lactams due to intrinsic metallo-β-lactamases and extended-spectrum β-lactamase, to aminoglycosides by aminoglycoside-6-adenyl transferases, and to colistin by intrinsic phosphoethanolamine transferase . The susceptibility patterns to fluoroquinolones and trimethoprim/sulfamethoxazole vary among Elizabethkingia strains . Distinguishing the known species of the genus Elizabethkingia by susceptibility patterns is challenging and no first-line antibiotics are currently available. In clinical laboratories, Elizabethkingia species are identified using biochemical methods such as API/ID32 Phenotyping Kits (bioMérieux, Marcy l’Etoile, France), Phoenix 100 ID/AST Automated Microbiology System (Becton Dickinson Co., Sparks, MD, USA), and Vitek 2 Automated Identification System (bioMérieux); mass spectrometry methods such as Vitek MS (bioMérieux) and Bruker Biotyper MS (Bruker Daltonics GmbH, Bremen, Germany); and by sequencing of 16S rRNA gene or whole-genomes . While whole-genome sequencing (WGS) can correctly identify Elizabethkingia at the species level, it is not routinely used due to its high cost and time-consuming procedures. Correct identification of Elizabethkingia at the species level is often vital to distinguish and to prevent hospital outbreaks. Up to now, the epidemiological information on Elizabethkingia species is not sufficient; therefore, it is important to accumulate surveillance data from clinical laboratories. 
In this study, we employed the matrix-assisted laser desorption/ionization time-of-flight mass spectrometry (MALDI-TOF MS)-based microbial identification method and explored biomarker mass peaks for distinguishing the species of Elizabethkingia . Detected biomarker mass peaks were annotated based on the genomically predicted protein mass database (GPMsDB) , in which theoretical protein masses were predicted based on genome sequences, including those of the genus Elizabethkingia . Bacterial strains The type strains of the known species of Elizabethkingia were used to obtain MALDI-TOF MS mass spectra. The type strain of E. meningoseptica NBRC 12535 T was obtained from the Biological Resource Center, National Institute of Technology and Evaluation (Tokyo, Japan); E. anophelis NCTC 13869 T from the National Collection of Type Cultures (Salisbury, the United Kingdom); E. miricola JCM 11413 T and E. argenteiflava JCM 32097 T from RIKEN BRC (Tsukuba, Ibaraki, Japan); and E. bruuniana CIP 111191 T , E. ursingii CIP 111192 T , and E. occulta CIP 111193 T from the Collection de l'Institut Pasteur (Paris, France). In addition, E. meningoseptica CCUG 69507 and CCUG 69515; E. miricola CCUG 69494 and CCUG 69519; E. bruuniana CCUG 69504, CCUG 69513, and CCUG 69522; E. ursingii CCUG 69498 and CCUG 69517; and E. occulta CCUG 69497 were obtained as additional reference strains from the Culture Collection University of Göteborg (CCUG) (Göteborg, Sweden). Eight clinical strains of Elizabethkingia were obtained from eight patients treated at Juntendo University Hospital in Japan between August 2015 and January 2024. Three strains of Elizabethkingia were obtained from mosquitoes in Japan from August to September 2023, and one strain from a mosquito in Thailand in July 2023 . These strains were cultured on 5% sheep blood agar (Becton, Dickinson-Diagnostic Systems) under aerobic conditions at 35°C. WGS Genomic DNA of the eight clinical and four environmental strains of Elizabethkingia species was extracted using DNeasy blood and tissue kits (Qiagen, Tokyo, Japan). DNA libraries were prepared using the Nextera XT DNA Library Prep Kit (Illumina, San Diego, CA, USA). The genomes were sequenced on the Illumina MiSeq platform using v3 chemistry (600 cycles) or the Illumina MiniSeq platform (300 cycles), and a summary of the assemblies is shown in . Raw reads for each strain were trimmed and assembled using CLC Genomic Workbench version 10.0.1 (CLC bio, Aarhus, Denmark). Genome relatedness of the relevant strains was estimated using an average nucleotide identity (ANI) calculator , the Type (Strain) Genome Server ( https://tygs.dsmz.de ), and 16S rRNA gene sequences. ANI values and 16S rRNA gene sequence identities were calculated using reference genomes of E. anophelis (NCTC 13869 T ; genome accession number GCF_002023665 ), E. argenteiflava (JCM 32097 T ; GCF_009904105 ), E. bruuniana (CIP 111191 T ; GCF_002024805) , E. meningoseptica (NBRC 12535 T ; GCF_900475375 ), E. miricola (JCM 11413 T ; GCF_008124555 ), E. ursingii (CIP 111192 T ; GCF_001521765 ), and E. occulta (CIP 111193 T ; GCF_002023715 ). Phylogenetic analysis Genome completeness and contamination were assessed using CheckM2 v1.0.1 with lineage_wf and default settings . A phylogenetic tree was constructed using the kSNP4 software based on the pangenome SNPs ( https://sourceforge.net/projects/ksnp/ ) , and visualized using iTol ver.6 ( https://itol.embl.de/ ). The strains of E. anophelis (NCTC 13869 T ; GCF_002023665), E. argenteiflava (JCM 32097 T ; GCF_009904105), E. 
bruuniana (CIP 111191 T ; GCF_002024805, CCUG 69504; GCF_002023775, and CCUG 69513; GCF_002023765), E. meningoseptica (NBRC 12535 T ; GCF_900475375, CCUG 69507; GCF_002022105 and CCUG 69515; GCF_002023305), E. miricola (JCM 11413 T ; GCF_008124555, CCUG 69494; GCF_001521745 and CCUG 69519; GCF_002023515), E. ursingii (CIP 111192 T ; GCF_001521765, CCUG 69498; GCF_002023365 and CCUG 69517; GCF_002023405), and E. occulta (CIP 111193 T ; GCF_002023715 and CCUG 69497; GCF_002023385) were used as reference strains. Calculation of the theoretical mass of Elizabethkingia species for MALDI-TOF MS proteotyping Theoretical masses of proteins encoded in the genomes of Elizabethkingia were calculated for the following genomes as part of the development of a GPMsDB: E. anophelis (NCTC 13869 T ; genome accession number GCF_002023665 ), E. argenteiflava (JCM 32097 T ; GCF_009904105 ), E. bruuniana (CIP 111191 T ; GCF_002024805 ), E. meningoseptica (NBRC 12535 T ; GCF_900475375 ), E. miricola (JCM 11413 T ; GCF_008124555 ), E. ursingii (CIP 111192 T ; GCF_001521765 ), and E. occulta (CIP 111193 T ; GCF_002023715 ) . The genome sequences were obtained from the NCBI database ( https://www.ncbi.nlm.nih.gov/ ). Gene prediction from the genomes obtained in this study was performed using GPMsDB-dbtk v1.0.1 ( https://github.com/ysekig/GPMsDB-dbtk ) . Bacterial sample preparation for MALDI-TOF MS Alpha-cyano-4-hydroxycinnamic acid (CHCA) was used as the matrix. To prepare the matrix solution, 10 mg of 4-CHCA was dissolved in 1 mL of solvent consisting of 1% (vol/vol) trifluoroacetic acid, 35% (vol/vol) ethanol, 15% (vol/vol) acetonitrile, and milliQ water. A full loop of bacterial cells was dispersed in 200 µL of distilled water in a microtube and mixed with 800 µL of ethanol. The suspensions were briefly vortexed and centrifuged at 15,000 × g for 2 min. The pellets were then dried for 5 min. The pellets were suspended in 50 µL of 70% formic acid, vortexed, suspended in 50 µL of acetonitrile, and centrifuged at 15,000 × g for 2 min. Supernatants were analyzed by MALDI-TOF MS according to the manufacturer’s instructions. MALDI-TOF MS measurement MALDI-TOF MS measurements were performed in positive linear mode using a MALDI-8020 RUO (Shimadzu Corporation, Japan) and a Microflex LT/SH (Bruker Daltonics, Germany) equipped with a 200 Hz Nd:YAG laser (355 nm) and a 60 Hz nitrogen laser (337 nm), respectively. Before sample analysis, the MALDI-TOF MS instrument was mass-calibrated externally using six peaks with m/z 4,365.4, 5,381.4, 6,411.6, 7,274.0, 8,369.8, and 10,300.1 from Escherichia coli DH5α. Five individual mass spectra were acquired for each bacterial extract in the range of m/z 2,000–20,000. Peak assignment was carried out using eMSTAT Solution software (Shimadzu Corp.). Species identification was performed by MBT Compass 4.1 with the Microflex LT/SH (Bruker Daltonics). Cluster analysis For biomarker validation, 29 Elizabethkingia strains, including the seven type strains, were analyzed by MALDI-TOF MS. Five mass spectra were acquired for each strain, and peak lists were extracted from those mass spectra. Biomarker analysis was performed using a binary biomarker matching table, which scored peaks as either 1 or 0, and a dendrogram was constructed with the unweighted pair group method with arithmetic mean (UPGMA) using SRplot . 
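The GPMsDB-style prediction of protein masses from genome-derived amino acid sequences can be illustrated with a small helper that sums standard average residue masses; the sequence below is a made-up placeholder rather than an Elizabethkingia protein, and modifications such as N-terminal methionine loss are ignored in this simplified sketch.

```python
# Minimal sketch: theoretical average mass of a protein predicted from its amino acid
# sequence, as used when building genome-derived mass databases. The example sequence
# is a placeholder; post-translational modifications are ignored.
RESIDUE_MASS = {  # standard average residue masses (Da)
    "G": 57.0519, "A": 71.0788, "S": 87.0782, "P": 97.1167, "V": 99.1326,
    "T": 101.1051, "C": 103.1388, "L": 113.1594, "I": 113.1594, "N": 114.1038,
    "D": 115.0886, "Q": 128.1307, "K": 128.1741, "E": 129.1155, "M": 131.1926,
    "H": 137.1411, "F": 147.1766, "R": 156.1875, "Y": 163.1760, "W": 186.2132,
}
WATER = 18.0153    # added once per polypeptide chain
PROTON = 1.0073    # for the singly protonated [M+H]+ ion observed in linear-mode MALDI

def average_mass(sequence: str, protonated: bool = True) -> float:
    """Sum of average residue masses plus water; optionally report the [M+H]+ value."""
    mass = sum(RESIDUE_MASS[aa] for aa in sequence.upper()) + WATER
    return mass + PROTON if protonated else mass

example_sequence = "MKRTYQPSVLKRNRTHGFRARMATKNGRKVLARRRAKGRHVLSA"  # hypothetical
print(round(average_mass(example_sequence), 1))
```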
Identification of Elizabethkingia strains from clinical and environmental samples by WGS Phylogenetic analysis based on WGS showed that the 29 strains examined in this study (12 test strains and 17 reference strains) belonged to distinct groups representing species-level clades of Elizabethkingia , namely E. anophelis , E. argenteiflava , E. bruuniana , E. meningoseptica , E. miricola , E. ursingii , and E. occulta . Of the 12 test strains, 9 were clustered within the E. anophelis group, 2 were in the E. miricola group, and 1 was in the E. meningoseptica group. Comparison of Elizabethkingia strains by WGS and 16S rRNA gene sequences The ANI values among Elizabethkingia type strains were obtained using WGS data. 
Seven-type strains representing Elizabethkingia species were clearly distinguished by WGS using the ANI calculator with a 95% cutoff value. However, it seems difficult to distinguish E. bruuniana , E. miricola , E. ursingii , and E. occulta strains based on 16S rRNA gene sequencing because of 98.80–99.60% identities among these species . Identification of Elizabethkingia species by MALDI-TOF MS using type strains The mass peaks identified using MALDI-TOF MS with the seven-type strains representing all the known Elizabethkingia species are shown in . The calculated and observed masses are summarized in . In the MALDI-TOF MS profiles, 14 major mass peaks were detected and successfully annotated with predicted protein names. Of these 14 annotated peaks, 11 were predicted as ribosomal subunit proteins, and the remaining three were annotated as histone H1, co-chaperonin GroES, and YtxH domain-containing proteins. As shown in , compared with the theoretical mass peaks of E. anophelis NCTC 13869 T , 12, 5, 9, 5, 6, and 6 peaks were different from the theoretical mass peaks of E. argenteiflava JCM 32097 T , E. bruuniana CIP 111191 T , E. meningoseptica NBRC 12535 T , E. miricola JCM 11413 T , E. ursingii CIP 111192 T , and E. occulta CIP 111193 T , respectively. To distinguish Elizabethkingia at the species level using MALDI-TOF MS, the combination of appropriate peaks of ribosomal L29, L30, S21, and YtxH domain-containing proteins was selected as a biomarker. As shown in and , ribosomal protein L29: The corresponding theoretical peaks were m/z 7,032.1 for E. anophelis NCTC 13869 T , E. bruuniana CIP 111191 T , E. meningoseptica NBRC 12535 T , E. miricola JCM 11413 T , and E. ursingii CIP 111192 T ; m/z 7,088.2 for E. argenteiflava JCM 32097 T ; and m/z 7,060.1 for E. occulta CIP 111193 T . Ribosomal protein L30: The corresponding theoretical peaks were m/z 6,317.5 for E. anophelis NCTC 13869 T ; m/z 6,274.4 for E. argenteiflava JCM 32097 T ; m/z 6,333.5 for E. bruuniana CIP 111191 T , E. miricola JCM 11413 T , E. ursingii CIP 111192 T , and E. occulta CIP 111193 T ; and m/z 6,283.5 for E. meningoseptica NBRC 12535 T . Ribosomal protein S21: The corresponding theoretical peaks were m/z 7,643.0 for E. anophelis NCTC 13869 T ; m/z 7,647.0 for E. argenteiflava JCM 32097 T ; m/z 7,585.9 for E. bruuniana CIP 111191 T , E. miricola JCM 11413 T , E. ursingii CIP 111192 T , and E. occulta CIP 111193 T ; and m/z 7,657.0 for E. meningoseptica NBRC 12535 T . YtxH domain-containing proteins: The corresponding theoretical peaks were m/z 12,125.7 for E. anophelis NCTC 13869 T , E. bruuniana CIP 111191 T , E. miricola JCM 11413 T , and E. occulta CIP 111193 T ; m/z 12,052.8 for E. argenteiflava JCM 32097 T ; m/z 12,106.7 for E. meningoseptica NBRC 12535 T ; and m/z 12,098.7 for E. ursingii CIP 111192 T . Of these 14 biomarker peaks, the amino acid sequences of L29, L30, S21, and YtxH domain-containing proteins were unique among the Elizabethkingia species. Compared to the amino acid sequence of L29 in E. anophelis , there were four substitutions in the amino acid sequence of E. argenteiflava , one in E. ursingii , and one in E. occulta . Compared to the amino acid sequence of L30 in E. anophelis , there were four substitutions in E. argenteiflava , one in E. bruuniana , four in E. meningoseptica , one in E. miricola , one in E. ursingii , and one in E. occulta . Compared to the amino acid sequence of S21 in E. anophelis , there were seven substitutions in E. argenteiflava , one in E. bruuniana , one in E. 
meningoseptica , one in E. miricola , one in E. ursingii , and one in E. occulta . Compared to the amino acid sequence of YtxH domain-containing proteins of E. anophelis , there were 22 substitutions in E. argenteiflava , 9 in E. meningoseptica , and 1 in E. ursingii . MALDI-TOF MS proteotyping was able to distinguish E. anophelis , E. argenteiflava , E. bruuniana/E. miricola , E. meningoseptica , E. ursingii , and E. occulta using the four biomarkers, namely the L29, L30, S21, and YtxH domain-containing proteins. However, E. bruuniana and E. miricola were not well separated by MALDI-TOF MS proteotyping. Assessment using clinical, environmental, and reference strains The analyses of the L29, L30, S21, and YtxH domain-containing proteins were evaluated using 29 clinical and environmental strains of Elizabethkingia species on the MALDI-8020 and Microflex LT/SH instruments . Identification using the Microflex LT/SH with the Biotyper database revealed that the strains of E. anophelis , E. meningoseptica , and E. miricola could be identified correctly, but the other strains, including E. argenteiflava , E. bruuniana , E. ursingii , and E. occulta , could not be identified at the species level . Nine E. anophelis strains had peaks identical to the theoretical masses at m/z 7,032.1 for L29; m/z 6,317.5 for L30; m/z 7,643.0 for S21; and m/z 12,125.7 for YtxH domain-containing proteins. Three E. bruuniana and four E. miricola strains had peaks identical to the theoretical masses at m/z 7,032.1 for L29; m/z 6,333.5 for L30; m/z 7,585.9 for S21; and m/z 12,125.7 for YtxH domain-containing proteins. Three E. meningoseptica strains had peaks identical to the theoretical masses at m/z 7,032.1 for L29; m/z 6,283.5 for L30; m/z 7,657.0 for S21; and m/z 12,106.7 for YtxH domain-containing proteins. Two E. ursingii strains had peaks identical to the theoretical masses at m/z 7,032.1 for L29; m/z 6,333.5 for L30; m/z 7,585.9 for S21; and m/z 12,098.7 for YtxH domain-containing proteins. One E. occulta strain had peaks identical to the theoretical masses at m/z 7,060.1 for L29; m/z 6,333.5 for L30; m/z 7,585.9 for S21; and m/z 12,125.7 for YtxH domain-containing proteins. All 29 strains, including 9 E. anophelis , 3 E. bruuniana , 3 E. meningoseptica , 4 E. miricola , 2 E. ursingii , and 1 E. occulta , had peaks of the four biomarkers identical to those of the corresponding type strains. A cluster analysis using the MALDI-TOF MS data revealed that Elizabethkingia species were classified into six groups: E. anophelis , E. argenteiflava , E. bruuniana/E. miricola , E. meningoseptica , E. ursingii , and E. occulta . It seems difficult to distinguish E. bruuniana and E. miricola strains based on the MALDI-TOF MS profiles. 
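The four-biomarker decision logic summarized above can be expressed as a simple lookup that matches observed peaks against the theoretical L29, L30, S21, and YtxH masses reported in this study; the matching function and the ±3 Da tolerance are illustrative assumptions and are not part of the published workflow.

```python
# Minimal sketch: assigning an Elizabethkingia group from the four biomarker peaks
# (L29, L30, S21, YtxH) using the theoretical m/z values reported in this study.
# The tolerance and matching logic are illustrative assumptions.
BIOMARKERS = {  # group: (L29, L30, S21, YtxH) theoretical m/z
    "E. anophelis":               (7032.1, 6317.5, 7643.0, 12125.7),
    "E. argenteiflava":           (7088.2, 6274.4, 7647.0, 12052.8),
    "E. bruuniana / E. miricola": (7032.1, 6333.5, 7585.9, 12125.7),
    "E. meningoseptica":          (7032.1, 6283.5, 7657.0, 12106.7),
    "E. ursingii":                (7032.1, 6333.5, 7585.9, 12098.7),
    "E. occulta":                 (7060.1, 6333.5, 7585.9, 12125.7),
}

def has_peak(peaks, target, tolerance=3.0):
    """True if any observed peak lies within +/- tolerance Da of the target m/z."""
    return any(abs(p - target) <= tolerance for p in peaks)

def assign_group(peaks):
    """Return the group whose four biomarker masses are all present in the peak list."""
    for group, targets in BIOMARKERS.items():
        if all(has_peak(peaks, t) for t in targets):
            return group
    return "unassigned"

# Example: a peak list consistent with the E. meningoseptica biomarkers.
spectrum = [4365.2, 6283.7, 7031.9, 7656.8, 9480.3, 12106.4]
print(assign_group(spectrum))
```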
MALDI-TOF MS proteotyping is useful for both accurate and rapid identification of Elizabethkingia species. Conventional methods, such as the API/ID32 Phenotyping Kits (bioMérieux), the Phoenix 100 ID/AST Automated Microbiology System (Becton Dickinson), and the Vitek 2 Automated Identification System (bioMérieux), require at least 4 h to identify E. anophelis , E. meningoseptica , and E. miricola . Vitek MS (bioMérieux) and Bruker Biotyper MS (Bruker Daltonics) can identify only three species ( E. anophelis , E. meningoseptica , and E. miricola ) because the remaining four species are not included in the databases of these two MALDI-TOF MS systems. Our present study suggests that 16S rRNA gene sequencing has difficulty distinguishing the seven Elizabethkingia species due to their high sequence similarities. Lin et al . revealed that the 16S rRNA gene sequences of E. bruuniana , E. miricola , E. occulta , and E. ursingii strains exhibited the highest identity rates with other type strains and could not be differentiated by phylogenetic tree analysis . 
Only WGS-based identification can distinguish Elizabethkingia species, but it is complicated and time-consuming. Predicting MALDI-TOF MS spectra using the GPMsDB will be useful for exploring biomarker peaks. In this study, 14 detected peaks were assigned to annotated proteins, and four biomarker peaks, including ribosomal L29, L30, S21, and YtxH domain-containing proteins, were screened as biomarkers for detecting Elizabethkingia species. The peaks likely representing L29, L30, S21, and YtxH domain-containing proteins are useful biomarkers for distinguishing Elizabethkingia at the species level. Cheng et al. reported that m/z 7,643.7 and m/z 10,320.9 are specific peaks for detecting E. anophelis ; m/z 3,141.5 and m/z 12,109.1 for detecting E. meningoseptica ; and m/z 3,792.5 and m/z 7,586.6 for detecting E. miricola and other species . In this study, E. anophelis , E. argenteiflava , E. bruuniana/E. miricola, E. meningoseptica , E. ursingii , and E. occulta could be distinguished using MALDI-TOF MS, but E. bruuniana and E. miricola could not be separated. Previous studies have reported that the phylogenetic trees based on 16S rRNA and rpoB gene sequencing are similar for E. bruuniana and E. miricola , leading to frequent misidentifications . shows the workflow for the rapid identification of Elizabethkingia species by MALDI-TOF MS. Seven Elizabethkingia species are classified into six types: E. anophelis , E. argenteiflava , E. bruuniana/E. miricola , E. meningoseptica , E. ursingii , and E. occulta . This workflow, based on four biomarker peaks, will be useful for identifying Elizabethkingia species. In the future, it is necessary to update this workflow with continuous analysis using the GPMsDB, which assigns taxonomic identification of bacterial and archaeal cells with user-provided MALDI-TOF mass spectrometry profiles . This study has a limitation: only a few clinical and environmental strains were obtained. It is necessary to confirm the findings using more diverse strains obtained from other hospital laboratories in different countries in the future. In conclusion, the detection of the mass peaks of L29, L30, S21, and YtxH domain-containing proteins by MALDI-TOF MS proteotyping will be useful for the accurate and rapid discrimination of Elizabethkingia species.
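Building on the peak-assignment sketch above, the workflow that resolves the seven species into six groups can be expressed as a lookup of the four assigned biomarker masses against species-specific signatures. The signature tuples below are taken from the theoretical masses reported for the type strains; the function itself is an illustrative sketch, not part of the published workflow.

```python
# Signature of (L29, L30, S21, YtxH) theoretical masses for each of the six groups
# into which the seven Elizabethkingia species are resolved.
SIGNATURES = {
    (7032.1, 6317.5, 7643.0, 12125.7): "E. anophelis",
    (7088.2, 6274.4, 7647.0, 12052.8): "E. argenteiflava",
    (7032.1, 6333.5, 7585.9, 12125.7): "E. bruuniana / E. miricola",
    (7032.1, 6283.5, 7657.0, 12106.7): "E. meningoseptica",
    (7032.1, 6333.5, 7585.9, 12098.7): "E. ursingii",
    (7060.1, 6333.5, 7585.9, 12125.7): "E. occulta",
}

def classify(assignments):
    """Return the species group for a dict produced by assign_biomarkers(), or None."""
    try:
        key = tuple(assignments[b] for b in ("L29", "L30", "S21", "YtxH"))
    except KeyError:
        return None  # one or more biomarker peaks were not detected
    return SIGNATURES.get(key)

print(classify({"L29": 7032.1, "L30": 6333.5, "S21": 7585.9, "YtxH": 12098.7}))
# E. ursingii
```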
Case Report: Holistic dental care for a child with Hunter syndrome: Addressing dental ramifications, overcoming challenges, and enhancing quality of life
5ea1cbaa-a058-4fc3-a314-5914265d910a
11134137
Dental[mh]
Hunter syndrome, an X-linked recessive mucopolysaccharide disorder, is characterised by a deficiency in the enzyme iduronate sulfatase, resulting in the accumulation of dermatan and heparan sulphates in various tissues. This disorder, which exhibits both mild and severe forms, is distinguished as an X-linked recessive condition from other autosomal recessive mucopolysaccharide disorders. The clinical manifestations of Hunter syndrome include macrocephaly, developmental delay, dysmorphic facies, skeletal abnormalities, joint contractures, hepatosplenomegaly, cardiac valvular disease, hirsutism, hyperkinesis, and rough behaviour. Inguinal hernias are reported in 60% of male patients, highlighting a characteristic feature of this syndrome. In addition, people with Hunter syndrome often experience umbilical hernias. Frequent otitis media and hearing problems are common secondary systemic manifestations, further illustrating the extensive impact of this disorder on various aspects of health. The oral manifestations are prominent and involve a shortened and broad mandible, radiolucent jaw lesions, flattened temporomandibular joints, macroglossia, peg-shaped teeth with wide spacing, highly arched palate with flattened alveolar ridges, and hyperplastic gingiva. Charles Hunter’s early observations of this disease in children, dating to 1915, highlighted features such as a slowed learning process, tonsil and adenoid issues, severe respiratory infections, and numerous physical abnormalities, often increasing with age. Hunter syndrome is extremely rare, affecting less than 20 cases per million births, and stands out as the rarest form of mucopolysaccharide disorder. It is important to note that the disorder can appear in successive births even without a family history of mucopolysaccharide disorders, making genetic counseling essential. Children with Hunter syndrome often experience severe dental problems, including high dental caries, due to various factors such as malformed teeth, poor oral hygiene, and limited access to dental care. This case report illustrates the challenging but essential task of full-mouth rehabilitation for a child with Hunter syndrome, addressing these dental issues to improve their oral health and overall quality of life. A 4-year-old male patient presented to the Department of Paediatric and Preventive Dentistry at the AB Shetty Memorial Institute of Dental Sciences, Deralakatte, Mangaluru, with the chief complaint of decay in the region of the upper front tooth for the past month. The referral originated from the Department of Pediatrics at Justice KS Hegde Charitable Hospital, with a thorough family history analysis that revealed no hereditary patterns of the disease and parents involved in a non-consanguineous marriage ( ). During the 32nd week of intrauterine development, ultrasound revealed oligohydramnios, resulting in intrauterine growth restriction (IUGR). The subsequent timeline showing the major events is shown in . The patient was diagnosed with Hunter syndrome through biochemical genetic testing at the Centre for DNA Fingerprinting at Kasturba Medical College, Mangalore. The assay using 4-methylumbelliferone confirmed the deficient activity of Iduronate 2-sulfatase (4.95 nmol/4 h/ml vs control), while the enzyme assay of Arylsulfatase indicated normal results. The diagnosis was confirmed by whole genome sequencing. 
After a year, the patient presented with complaints of pain in the upper front teeth, mouth breathing, and swelling over the right inguinal hernia (measuring 2 × 2 cm over the umbilicus and the right inguinal region, spontaneously reducible, and expanding on the cough impulse). On physical examination, the child presented with a clumsy gait, thick coarse hair and skin, generalised facial puffiness ( ) and brittle unkempt nails. On dental examination, severe early childhood caries was identified. Despite initial attempts to guide behaviour in the dental operating room, the child, who exhibited a Frankel rating of (--), displayed high apprehension and global developmental delay. Subsequently, a collaborative approach involving multiple departments was implemented to provide treatment under general anaesthesia. The Department of Otolaryngology performed endoscopic adenoidectomy to address grade IV adenoid hypertrophy, while the Department of Paediatric Surgery performed an umbilical and right inguinal herniotomy. Oral rehabilitation (Oral prophylaxis; Pulpectomy (Metapex) with respect to FDI Tooth Number 63, 64; Glass Ionomer Cement Type IX restoration with respect to FDI Tooth Number 65, 75, 55; Indirect Pulp therapy with respect to FDI Tooth Number 51, 53, 54, 74, 84, 85 (Biodentine); Composite Restoration with respect to FDI Tooth Number 63, 51; Extraction with respect to FDI Tooth Number 61, 62; Topical Fluoride application (APF gel)) was undertaken by the Department of Pediatric and Preventive Dentistry ( ). This comprehensive interdisciplinary effort aimed to address the diverse health concerns of the child efficiently and with due consideration of their overall well-being. After the procedure, the child received a comprehensive medication regimen and was reviewed after one week. One month after the procedure, the patient reported an uneventful follow-up, with no new dental complaints. A holistic approach to subsequent oral care was diligently implemented. The personalised home oral care routine introduced a wide range of activities, including gum massage, blowing exercises, and oral motor drills, designed not only to stimulate oral sensory muscles but also to prevent potentially harmful coping mechanisms that arise from regression in daily activities. Recognising the delicate nature of the oral epithelium, the integration of topical vitamin E into twice-daily oral massage was intended to promote epithelization and overall oral health. Confronting the challenge posed by the child's inability to spit, a suite of recommendations was proposed, including the use of fluoride-free toothpaste that was safe to consume in minimal amounts, soft-bristled or electric toothbrushes, routine tongue scraping, and post-meal mouth rinses facilitated by an irrigation syringe. Dietary modification, a critical aspect of this holistic care plan, included limiting the consumption of sweet and sticky foods and incorporating probiotics. The importance of regular mealtimes and family meals further enriched the comprehensive care approach. During recall visits, the application of topical 5% sodium fluoride with casein phosphopeptide–amorphous calcium phosphate (CPP-ACP) remineralisation paste was recommended every three to six months. This strategic intervention aimed to improve oral health over time.
Assessment of plaque and gingival indices before and after implementation of the home care regimen revealed a significant improvement after 6 months of follow-up, underscoring improved oral quality of life. This case report underscores the effectiveness of employing a multidisciplinary approach to address the various challenges encountered during a single intubation procedure performed under general anaesthesia for a child. The decision to adopt this approach was rooted in the goal of mitigating possible side effects related to multiple intubations, a consideration that became important given the intricacies of managing a difficult airway. This approach not only led to a shorter hospital stay, thus reducing the risk of nosocomial infections and promoting cost effectiveness, but also proved to be notably advantageous for the child, especially considering that individuals with Hunter syndrome often require frequent hospital visits due to compromised general health. However, it is crucial to recognise that this strategy poses a challenge due to the extended intubation hours, which can complicate extubation and postoperative recovery. Fortunately, this concern was effectively addressed in the present case through meticulous time management. In particular, the entire procedure was completed successfully in a time frame of four hours. The primary objective of our multidisciplinary approach was to effectively manage pain, improve oral hygiene, and mitigate degenerative potential. The collaborative procedure, conducted in coordination with the Department of Otolaryngology and involving adenoidectomy, required oral intubation due to the impracticality of nasal intubation. Unfortunately, this choice compromised the field of vision and accessibility for comprehensive dental rehabilitation. After adenoidectomy, continuous suction became imperative to address blood aggregation in the oral cavity and maintain a blood-free field for treatment. Macroglossia is a characteristic feature of Hunter syndrome. After intubation, there was significant tongue enlargement, which exacerbated the challenges related to our field of vision and presented a potential risk of airway obstruction. This situation required vigilant supervision. Additionally, an increased fragility of oral tissues was observed, which further added to the complexity of the procedure. The decision to administer general anaesthesia for dental treatment to this child came from the challenging behaviour exhibited in the dental chair. However, the successful completion of full-mouth rehabilitation helped us to focus on implementing preventive strategies. Given the diagnosis of global developmental delay, conventional oral hygiene practices were not feasible, which required the adoption of an innovative and modified preventive regimen. Structural and skeletal deformities associated with these conditions also affect the overall health and well-being of children with Hunter syndrome. Crucially, ongoing follow-up for such cases is imperative due to reported cases of associated jaw lesions, reminiscent of dentigerous cysts. Monitoring, particularly during the second to fourth decades of life, is essential. Recognising these lesions, particularly those associated with unerupted first permanent molars containing pools of chondroitin sulphate B, is significant. Their tendency to worsen with age underscores the need for vigilance and early interventions.
The key takeaway is that as paediatric dentists, a comprehensive approach is crucial, particularly for children with syndromes such as Hunter syndrome. Recognising and addressing unique challenges and collaborating across disciplines improve dental care and contribute to overall well-being. Ethical approval was waived for this case report as it does not involve experimental interventions. Patient consent for publication, including written consent for clinical details and images, was obtained from the patient’s father, with confidentiality preserved.
Comparison of HercepTest™ mAb pharmDx (Dako Omnis, GE001) with Ventana PATHWAY anti-HER-2/neu (4B5) in breast cancer: correlation with
27c7b30a-e5d9-4eea-99db-0e323fd059c7
9636083
Anatomy[mh]
The human epidermal growth factor receptor 2 (HER2, also referred to as HER2/neu) is one of four members belonging to the epidermal growth factor receptor (EGFR) protein family. The HER2 protein is characterized by its tyrosine kinase activity, and the HER2 oncogene controls cell proliferation and apoptosis . Initially described in 1985 by King et al. , HER2 overexpression has been demonstrated to play a major role in the onset, development, and progression of breast cancer (BC). About 15–20% of BC patients show HER2 amplification and/or HER2 over-expression, which are associated with increased tumor aggression and poor prognosis, although these patients are eligible for HER2-directed therapy . HER2-targeted monoclonal antibodies (mAbs) such as trastuzumab and/or pertuzumab (used as single or combined agents, with or without chemotherapy) are now the standard treatment for patients with HER2-positive advanced BC, acting to block the corresponding pathway(s) and provide improved overall survival rates . Beyond the use of these two drugs, novel therapies based on anti-HER2 antibody–drug conjugates (ADCs) have been developed. For example, trastuzumab emtansine (T-DM1, Kadcyla®) was the first of its kind approved in 2013 by the European Medicines Agency (EMA) for HER2-overexpressing and/or -amplified advanced metastasized BC . New types of ADCs have recently been developed using trastuzumab linked to novel toxic agents (e.g., deruxtecan, a topoisomerase I inhibitor) (T-DXd, Enhertu®) and have shown efficacy in patients even after T-DM1 therapy failure . Interestingly, there is also evidence that T-DXd is effective in patients with BC exhibiting low levels of HER2 protein as determined by IHC (i.e., HER2 IHC 2 + /non-amplified or IHC 1 +) . Since almost 40–50% of BC cases are classifiable as HER2-low , many more patients may benefit from this new type of HER2-targeted therapy (reviewed in ). Methods to screen for eligible BC patients who may benefit from HER2-targeted therapies currently include IHC demonstrating HER2 protein overexpression and in situ hybridization (ISH) to detect HER2 gene amplification. Other methods such as quantitative real-time PCR (transcript amplification) are not recommended for routine patient selection . Available IHC assays are well-established, robust, and inexpensive. While several different antibody clones have been successfully used in clinical trials (e.g., R60, 10H8, and CB11), the Agilent/Dako HercepTest™ pAb pharmDx (Autostainer Link) (HercepTest (poly)) and Ventana PATHWAY® anti-HER-2/neu (4B5) (PATHWAY 4B5) are currently the most widely used IHC assays . Many studies have analyzed the diagnostic value (i.e., sensitivity and specificity) of these two IHC assays for detecting HER2-positive BC by comparing IHC results to the HER2 gene amplification status determined by ISH assays . Accordingly, international guidelines for HER2 testing in BC focus on the correlation between IHC and ISH to reliably select those HER2-positive carcinomas most likely to respond to HER2-directed therapies. Due to the potentially broader applicability of current anti-HER2-targeting drugs, the sensitivity of these assays is now of greater importance for selecting eligible patients . In this context, it has become necessary to evaluate the diagnostic utility of HER2 assays with respect to the detection of not only HER2-positive (IHC 3 + and/or amplified) but also HER2-low (IHC 2 + or IHC 1 + , non-amplified) BC cases.
In this context, studies comparing the polyclonal HercepTest (poly) and the monoclonal PATHWAY 4B5 have revealed good concordance between the two methods for detection of HER2-positive BC . However, there is evidence that the HercepTest (poly) might be less sensitive in detecting HER2-low BC as compared to the PATHWAY 4B5 assay . Recently, a second-generation, CE-IVD-marked HercepTest™ mAb pharmDx (Dako Omnis) kit (HercepTest (mAb)) has become available in Europe and Canada. This new assay is run on the Dako Omnis staining platform using a monoclonal rabbit antibody (clone DG44) . Interestingly, according to the 2021 NordiQC data, laboratories applying the HercepTest (mAb) achieved the highest overall pass rate (100%) . Herein, we report the results of an IHC concordance study comparing the HercepTest (mAb) run on the Dako Omnis platform and the PATHWAY 4B5 assay run on the Ventana BenchMark ULTRA using a BC cohort of 119 samples and assessing assay sensitivity and specificity with respect to amplification status and inter-assay and inter-observer variations. Sample selection and study design ( Fig. ) The clinical performance of the new HercepTest (mAb) (Agilent Technologies, Santa Clara, CA, USA) was compared with the performance of the monoclonal PATHWAY 4B5 assay (Ventana) (Roche Tissue Diagnostics, Tucson, AZ, USA) using a selection of 120 BC samples. These commercially acquired tissue blocks were originally pre-tested for their HER2 status by the vendor using either Ventana or Leica antibodies and verified by Agilent Technologies applying HercepTest (mAb) and HercepTest (poly). The testing cohort was composed of an equal number of cases representative of HER2 status 0, 1 + , 2 + , and 3 + , respectively ( n = 30/status). Within the IHC score 2 + group, 15 samples were selected to be HER2 amplified, while the remaining samples were non-amplified. Specimens were acquired by Agilent Technologies from Danish hospitals (with ethical permission) and external tissue vendors in the USA and Canada (see Vendor list). The specimens were de-identified, and all were fixed in 4% neutral buffered formalin and paraffin-embedded compliant with ASCO/CAP guidelines. The BC specimens were enrolled in the study following assessment of tissue sections stained with H&E, HercepTest (mAb), and HercepTest (poly). A specimen was included if (1) invasive BC tissue and an adequate number of tumor cells (≥ 100) were present, (2) the tissue morphology was adequately preserved, and (3) there was an absence of processing artifacts that would negatively affect the assessment of the HER2 status. Each specimen entered the study with an enrollment IHC score based on HercepTest (mAb) and HercepTest (poly). FISH status for enrollment of amplified and non-amplified specimens was based on information previously provided by the commercial tissue vendor, if these data were available. One tumor sample had an inadequate amount of tumor tissue and was rejected by the observing pathologists; hence, a total of 119 BC specimens were used for this study. The final selection of samples consisted of 103 surgical resections and 16 biopsy specimens. Tumor types included 106 ductal (89.1%), 9 lobular (7.6%), and 4 mucinous (3.4%) carcinomas. IHC results were assigned to each of these 119 samples. HER2 FISH analysis revealed 114 evaluable samples out of 119 tested (see also Supplementary Data ); five of the BC samples produced non-evaluable FISH signals due to sub-optimal tissue pre-analytics despite repeat testing.
Sample preparation Twelve tissue sections, 4–5 µm thick, cut from each of the selected specimens were mounted onto Epredia™ SuperFrost Plus™ Microscope Slides. On-slide controls containing HER2-positive (FFPE cell pellet from IHC 2 + cell line MDA-453) and negative (tonsil sample) cores were added to each slide. Mounted tissues were baked at 60 °C for 1 h. Two tissue sections (first and last) from each collected specimen were H&E stained. Immunohistochemistry HercepTest™ mAb pharmDx (Dako Omnis) (GE001) The IHC staining protocol using the HercepTest (mAb) was performed as described by the manufacturer . Freshly cut tissue was processed on the Dako Omnis platform (Agilent Technologies, Santa Clara, CA) together with kit control slides for every staining run, using an automated staining protocol validated for HER2 detection . PATHWAY® anti-HER-2/NEU, clone 4B5 (790–2991) IHC staining using the PATHWAY® HER-2/neu rabbit monoclonal antibody 4B5 was performed according to the recommendations of the manufacturer . Freshly cut tissue was processed on the Ventana BenchMark ULTRA (Ventana Medical Systems, Roche Diagnostics, Tucson, AZ) together with kit control slides for every staining run, using an automated staining protocol validated for HER2 detection . IHC scoring IHC staining for HER2 was independently evaluated by three trained pathologists (IN, MK, JR), followed by a consensus session for discordantly scored samples to define a consensus score for each case. IHC stains of the two assays were read after a 2-week wash-out period, and all the pathologists were blinded to the FISH results. In addition to a pre-study training provided by Dako/Agilent, all investigators had extensive experience in HER2 evaluation, having served over the past 20 years as readers in most of the trastuzumab, pertuzumab, and T-DM1 approval BC studies by Targos GmbH (Kassel, Germany) (for a review of studies screened by first-generation HercepTest (poly), see ). IHC scoring was performed according to the 2018 ASCO/CAP guidelines . Accordingly, cases with complete intense staining in ≤ 10% of tumor cells, as well as cases with intense and lateral or basolateral (“U-type”) staining, were included in the IHC 2 + category. For cases of IHC 1 + staining intensity (i.e., faint/barely perceptible membrane staining), the percentage of stained cells ≤ 10% or > 10% was recorded separately according to Ventana Instructions for Use (IFU) . Intensity scoring was performed by applying the magnification rule as published previously by our group . Fluorescence in situ hybridization assessment HER2 IQFISH pharmDx (K5731) Determination of HER2 gene amplification was analyzed using the HER2 IQFISH pharmDx kit according to the recommendations of the manufacturer . HER2 in situ hybridizations were evaluated by a pathologist (IN) using the updated 2018 ASCO/CAP guidelines. For final interpretation of the FISH data, newly defined ISH groups (1–5) were taken into consideration . Accordingly, group 1 (ratio ≥ 2.0 and gene count ≥ 4.0) and group 3 cases (ratio < 2.0 and gene count ≥ 6.0) with IHC 3 + or IHC 2 + were considered FISH positive. Statistical evaluations For comparison of datasets, chi-square test ( X 2 ) was used with p < 0.05 considered as statistically significant. Test performance was evaluated using FISH as a reference standard. 
Sensitivity and specificity were calculated as follows:

$$\text{Estimated sensitivity} = 100\% \times \frac{\#\,\text{true positive events}}{\#\,\text{true positive events} + \#\,\text{false negative events}}$$

$$\text{Estimated specificity} = 100\% \times \frac{\#\,\text{true negative events}}{\#\,\text{false positive events} + \#\,\text{true negative events}}$$

Inter-rater reliability (IRR), defined as the ratio of the total number of agreements among raters and the total number of ratings, was calculated as follows:

$$\text{IRR}\,[\%] = \frac{\text{total}\ \#\ \text{of agreements}}{\text{total}\ \#\ \text{of ratings given by each rater} \times \#\ \text{of raters}} \times 100$$
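Expressed as code, these definitions reduce to simple confusion-matrix counts with FISH taken as the reference standard. The following sketch is a minimal illustration, assuming each case is summarized by a binary IHC call and a FISH amplification status; the toy data are hypothetical.

```python
def sensitivity_specificity(cases):
    """cases: list of (ihc_positive: bool, fish_amplified: bool) pairs.
    FISH amplification is treated as the reference standard."""
    tp = sum(1 for ihc, fish in cases if ihc and fish)
    fn = sum(1 for ihc, fish in cases if not ihc and fish)
    tn = sum(1 for ihc, fish in cases if not ihc and not fish)
    fp = sum(1 for ihc, fish in cases if ihc and not fish)
    sensitivity = 100.0 * tp / (tp + fn) if (tp + fn) else float("nan")
    specificity = 100.0 * tn / (tn + fp) if (tn + fp) else float("nan")
    return sensitivity, specificity

# Hypothetical toy cohort: (assay calls the case positive?, FISH amplified?)
toy_cases = [(True, True), (True, True), (False, True), (False, False), (False, False)]
sens, spec = sensitivity_specificity(toy_cases)
print(f"sensitivity = {sens:.1f}%, specificity = {spec:.1f}%")  # 66.7%, 100.0%
```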
Performance of HercepTest (mAb) and inter-rater agreement ( Fig. ) In HER2-expressing samples, each of the HER2 IHC assays produced specific membrane-bound staining that was easy to interpret at all intensities (weak to strong). Although non-specific background staining was not observed, a weak and only focally distributed staining of normal duct cells was detected with HercepTest (mAb) (Fig. , ). Furthermore, PATHWAY 4B5 staining was characterized by the occasional presence of diffuse and/or dot-like cytoplasmic staining in tumor and normal cells, as previously reported . Signal detection in normal duct samples was usually of low intensity (Fig. , ). Noteworthy, we did not observe relevant staining differences between the sample types, e.g., no higher frequency of edge artifacts in biopsies. Within the HercepTest (mAb) and the PATHWAY 4B5 assays, an overall inter-reader agreement of 84% (100/119) and of 89.1% (106/119) was observed. Study IRR was recorded as 89.4% and 92.7%, respectively.
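The inter-reader agreement and IRR values reported above can be computed directly from the three pathologists' per-case scores using the IRR formula given in the methods. In the sketch below, counting an "agreement" as a rating that matches the case's majority score is our own assumption, since the counting convention is not spelled out, and the example scores are invented for illustration.

```python
from collections import Counter

def inter_rater_reliability(scores_by_case):
    """scores_by_case: list of per-case score lists, one entry per rater (e.g. ['0', '1+', '1+']).
    Returns IRR in percent, following
    IRR = total agreements / (ratings per rater * number of raters) * 100,
    where an agreement is assumed to be a rating that matches the case's majority score."""
    n_raters = len(scores_by_case[0])
    total_ratings = len(scores_by_case) * n_raters
    agreements = 0
    for case_scores in scores_by_case:
        majority_score, majority_count = Counter(case_scores).most_common(1)[0]
        if majority_count > 1:  # no majority -> no agreements counted for this case
            agreements += majority_count
    return 100.0 * agreements / total_ratings

# Invented example: five cases scored by three raters.
example = [["0", "0", "0"], ["1+", "1+", "0"], ["2+", "2+", "2+"],
           ["3+", "3+", "3+"], ["1+", "0", "2+"]]
print(f"IRR = {inter_rater_reliability(example):.1f}%")  # 11 of 15 ratings agree -> 73.3%
```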
Discrepantly scored samples were re-evaluated by all three observers during a final review session and assigned consensus scores that were used for further analyses. It is noteworthy that most disagreements (68.8%) between pathologists’ scores were observed within the HER2-low range (later consented as IHC score 0 or 1 +), especially near the cut-off for HER2 ultra-low category exhibiting a HER2 score of 0 with incomplete and faint staining in ≤ 10% of tumor cells. This led to several challenging samples around the cut-off value (IHC 1 + versus IHC 0, according to ASCO/CAP 2018). HercepTest (mAb) and PATHWAY 4B5 — inter-assay concordance ( Table ) Based on the consented scores for both assays, complete concordance was reached in 83 of 119 tumors (69.7%). Thirty-six samples received discordant scores, including 26 resections (25.2%) and 10 biopsies (62.5%). Virtually, all these cases ( n = 35) showed higher scores with HercepTest (mAb), and in only one case (biopsy) was the staining recorded to be higher by PATHWAY 4B5. While 56 samples were evaluated as negative (IHC 0) for HER2 by PATHWAY 4B5, only 35 specimens were likewise identified by HercepTest (mAb). Thus, adjustments to discordant scores were mainly associated with the PATHWAY 4B5 negative group of IHC 0 and IHC 1 + (33 of 36). This led to a significantly different classification of BC samples by both assays. For instance, the group of HER2-low expressing samples (HER2 score 2 + or 1 + /not amplified) was found to be significantly larger for HercepTest (mAb) (35% versus 19%; p < 0.01). Only three of the discordant cases were observed in the PATHWAY 4B5 IHC 2 + and IHC 3 + group, with scores for two tumors being raised from IHC 2 + to 3 + , and one score downgraded from IHC 3 + to 2 + . Lastly, the concordance of both assays was found to be 83.7% (87/104 cases) for HER2-negative (IHC 0/1 +) versus HER2-positive (IHC 3 +). HercepTest (mAb) and PATHWAY 4B5 — correlation with FISH ( Fig. ) FISH data were obtained for 114 specimens, 42 of which showed HER2 amplification (Fig. ). All non-amplified cases ( n = 72) were identified as IHC negative (0/1 +) or equivocal (2 +) by both assays, i.e., no false positives were recorded, corresponding to 100% specificity. However, two false negatives were observed with the PATHWAY 4B5 assay in which two amplified (surgical) specimens showed an IHC 1 + score compared to a IHC 2 + score with the HercepTest (mAb), leading to a slightly lower sensitivity for PATHWAY 4B5 (95.2% versus 100%; Fig. ). Both cases were tested amplified according to the external vendor information as well as within this study. Although more IHC 2 + cases were identified by the HercepTest (mAb) as being not amplified (14 of 27) compared to PATHWAY 4B5 (3 of 15), all the amplified tumors were detected as positive (IHC 2 + or 3 +) when using the HercepTest (mAb) (see Figs. and ). A more detailed analysis of FISH data was conducted with respect to ISH groups according to ASCO/CAP 2018 guidance . Compared to PATHWAY 4B5, scores for 13 cases were increased to IHC 2 + when using HercepTest (mAb) ( n = 10 from IHC 1 + and n = 3 from IHC 0; see Table ). In two tumors, FISH revealed a HER2 ratio ≥ 2.0 and mean gene count per cell ≥ 4.0, corresponding to ISH group 1 (HER2 positive; see Fig. and Table ; sample nos. 86 and 116). In addition, four tumors with ratios ≤ 2.0 exhibited increased HER2 gene counts between ≥ 4 and < 6, corresponding to ISH group 4 (Table : samples 56, 78, 103, and 109). 
In these cases, HER2 amplification status should have been considered questionable and been reported as negative, with a comment about the uncertainty of a response to HER2-targeted drug therapy available at the time of guidance (i.e., 2018). HercepTest (mAb) and PATHWAY 4B5 — correlation with HER2-low status ( Fig. ) Since the development of novel HER2-directed drugs may benefit BC patients with low levels of HER2 expression (IHC 2 + /non-amplified and IHC 1 +) , the assay data were further analyzed with respect to their sensitivity and specificity for detecting HER2-low tumors (Fig. ). Out of 41 tumors (all non-amplified) that were determined to be completely negative by PATHWAY 4B5, only 19 cases (46.3%) showed no staining when using HercepTest (mAb), corresponding to the more strictly defined IHC 0 category using the Ventana score algorithm. In the remaining 22 cases, the HercepTest (mAb) stained at least some tumor cells, with approximately one-third of these cases belonging to the HER2-low group (7 × IHC 1 + , 1 × IHC 2 + non-amplified) and 14 cases in the HER2 "ultra-low" group with < 10% stained tumor cells (see also Fig. , marked in grey), thus highlighting the high sensitivity of the HercepTest (mAb) used in this study.
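The HER2 categories discussed here (positive, low, ultra-low, and null) follow directly from the consensus IHC score and the ISH result. The sketch below encodes that mapping; treating an IHC 0 case with faint, incomplete staining in up to 10% of tumor cells as "ultra-low" mirrors the wording above, but the exact thresholds, the "HER2-null" label, and the function interface are our own illustrative assumptions.

```python
def her2_category(ihc_score, amplified, pct_stained_cells=0.0):
    """Classify a case into a HER2 expression category.

    ihc_score: one of '0', '1+', '2+', '3+' (consensus IHC score).
    amplified: True if ISH/FISH shows HER2 amplification, False otherwise.
    pct_stained_cells: percentage of tumor cells with any membrane staining,
        used only to split IHC 0 into 'null' versus 'ultra-low'.
    """
    if ihc_score == "3+" or (ihc_score == "2+" and amplified):
        return "HER2-positive"
    if ihc_score == "1+" or (ihc_score == "2+" and not amplified):
        return "HER2-low"
    if ihc_score == "0" and 0 < pct_stained_cells <= 10:
        return "HER2-ultra-low"
    return "HER2-null"

print(her2_category("2+", amplified=False))                       # HER2-low
print(her2_category("0", amplified=False, pct_stained_cells=5))   # HER2-ultra-low
```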
Accurate assessment of HER2 status is of utmost importance for patient selection and the determination of those eligible for HER2-directed therapy. Test kits approved by the FDA have been introduced to minimize HER2 testing variability and are now recommended for use by ASCO/CAP .
To the best of our knowledge, this is the first study comparing the technical and diagnostic performance of the new HercepTest (mAb) with the well-established Ventana PATHWAY 4B5 test kit. The original manual HercepTest (poly) was approved in 1998 by the FDA for assessing the eligibility of BC patients to receive trastuzumab antibody therapy. Recent reports, however, demonstrated higher specificity and sensitivity for alternative HER2 IHC assays compared to HercepTest (poly) for Autostainer . To meet these challenges, a new HercepTest (mAb) pharmDx kit was developed that uses a licensed monoclonal antibody (clone DG44) produced with a patented process by Epitomics Inc. (an Abcam company). IHC detection of HER2 using the new HercepTest (mAb) pharmDx kit is performed on the fully automated Dako Omnis staining device. The PATHWAY 4B5 was also run on an automated staining system (i.e., the Ventana Benchmark ULTRA); however, HercepTest (mAb) performed on Dako Omnis platform, using the newly invented “dynamic gap staining technology” (reviewed in ), was observed to provide slightly more consistent staining as indicated by lower numbers of repetitions and fewer staining artifacts (e.g., patchy staining, edge effects, and air bubbles; see Fig. ). In general, IHC staining of the HercepTest (mAb) assay was characterized by distinct and sharp detection of HER2, with low to no background and/or non-specific signal detection (see also Fig. ). Dot-like cytoplasmic staining, with or without basal membrane staining as outlined by Ventana IFU for PATHWAY 4B5, was not observed with HercepTest (mAb) in our sample series. However, a partial, mostly weak staining of normal epithelium could be seen in some samples, but was associated neither with the HER2 protein level of tumor tissue nor with false positive immunoreactions, i.e., IHC 3 + and FISH negatives as described in some cases previously for polyclonal HercepTest (poly) . Instead, the comparison with FISH data demonstrated the opposite, with 100% concordance between HercepTest (mAb) and amplification status for cases scored as 0, 1 + , or 3 + by HercepTest (mAb). Two false negatives were observed using the PATHWAY 4B5 assay (1 + by PATHWAY 4B5 IHC but FISH positive), resulting in a reduced sensitivity. In this context, it should be noted that previous studies frequently reported a certain number of tumors without HER2 protein overexpression (IHC score 0 or 1 +) but being HER2 gene amplified . While commonly considered as cases with DNA-uncoupled synthesis of HER2 protein, it might be of interest to confirm IHC score with the apparently more sensitive HercepTest (mAb). Notably, the HercepTest (mAb) assay generated a significantly higher rate of equivocal cases (30 versus 15; see Table ). Based on the 114 cases with available FISH data, 51.8% (14/27) were non-amplified by FISH compared to 20% (3/15) by PATHWAY 4B5. It may be argued then that HercepTest (mAb) could result in increased costs and delayed diagnosis due to an increased rate of reflex FISH testing in clinical practice. However, in light of the updated and more focused ASCO/CAP HER2 testing guideline that defines five diagnostic ISH groups , an in-depth analysis of FISH data revealed an additional four tumors identified by HercepTest (mAb) as IHC 2 + had increased gene counts (between ≥ 4.0 and < 6.0) and a ratio < 2.0, due to polysomy in three cases. These tumors correspond to ASCO/CAP ISH group 4 and would have been classified as negative by PATHWAY 4B5 testing. 
The clinical implication for these patients is still not clear. The prevalence of this group in different studies varies from 1.9% to 14.2%, and it was classified as "equivocal" in the ASCO/CAP 2013 guidelines. Since 2018 , these cases should be reported as HER2-negative with an associated comment describing the uncertainty about patient response to the HER2-directed therapies available at the time of guidance. Meanwhile, novel HER2-directed drugs such as T-DXd have been developed using a new generation of ADCs . In contrast to the first approved ADC (T-DM1), for which therapy effectiveness is still dependent on the demonstration of HER2-positive tumors (IHC 3 + and/or ISH amplified), T-DXd was beneficial even after Kadcyla therapy . Interestingly, there is increasing evidence that patients with HER2-low BC (HER2 2 + /non-amplified or IHC 1 + , according to ASCO/CAP 2018) also benefit from T-DXd. These new developments in HER2-targeted BC therapy have implications for both testing and the definition of HER2 sensitivity and specificity . Our data demonstrate a higher detection rate of HER2-amplified breast carcinomas (100% versus 95%) by the HercepTest (mAb) compared to the Ventana PATHWAY 4B5 assay. In addition, the number of HER2-low expressing samples was markedly increased by using HercepTest (mAb) (35% versus 19%). In the upcoming era of HER2-targeted therapies administered to HER2-low BC patients , both observations would significantly increase the number of patients eligible for HER2-directed therapies. These promising results have already raised much interest in the scientific community focusing on the assessment of HER2-low BC in future clinical diagnostics . Recent clinical trials using the HER2-directed antibody–drug conjugate T-DXd have already included patients exhibiting either IHC 1 + or 2 + /HER2 non-amplified in their HER2-low group (e.g., DB02 [NCT03523585]) or very low ("ultra-low") HER2-expressing cohorts (HER2 IHC 0 < 1 + , weak staining in less than 10% of tumor cells, e.g., DB06 [NCT04494425]) eligible for therapy. Most recently, a large T-DXd phase III trial (DB-04) proved effective in HER2-low metastatic breast cancer . Thus far, these studies are based on expression analysis using the PATHWAY 4B5 antibody clone. As demonstrated in this study, the increased sensitivity of HercepTest (mAb) may allow inclusion of more patients in clinical trials, specifically by enrolling patients with HER2-low and ultra-low expression, and allow the investigation of clinical response rate and outcome in these cohorts. Another implication of testing the HER2-low category of patients in this study is the accuracy of HER2 interpretation within this tumor group. Inter-rater variability was mostly restricted to the discordant assessment of HER2 0/1 + cases near the cut-off value. Future HER2 scoring will need to include more training for the HER2-low category of patients, and ASCO/CAP may refine their guidelines appropriately. Recently, the French GEFPICS group published the first national recommendation for HER2 status evaluation in breast cancer with emphasis on the HER2-low concept, underlining the need for harmonized testing guidelines . Finally, we regard the results of this study as representative for all HER2 scores, including the recently delineated HER2-low category, as the carefully pre-selected case series represents the entire range of IHC scores and amplification levels, including different ISH subgroups .
Accordingly, about 35% of cases were HER2-positive (IHC 3 + or IHC 2 + amplified), belonging to ISH group 1 ( n = 40) and group 3 ( n = 2). In the remaining non-amplified cases, the accuracy of the assays was determined both with respect to the detection of HER2-negative versus HER2-positive tumors and considering HER2-low (IHC 1 + and IHC 2 + /non-amplified) and HER2-ultra-low (IHC 0 < 1 +) tumors. Therefore, our comparative study of HercepTest (mAb) with PATHWAY 4B5 addresses the main challenges that may arise during HER2 testing in BC, particularly in view of the emerging anti-HER2-directed drugs and patients with lower HER2 expression. However, determining the predictive value of the new HercepTest (mAb) in clinical trials using this new assay is of crucial importance, since more sensitive tests may not necessarily be the best predictors of response to targeted therapy. In conclusion, while both IHC assays are highly suitable for the detection of HER2 protein in BC samples, fewer assay-related failures (e.g., staining artifacts) were observed using HercepTest (mAb) Dako Omnis. The data demonstrated that HercepTest (mAb) exhibited both high specificity (100%) and high sensitivity (100%), which could be critical in patient selection for new HER2-targeting treatment options. Future studies will demonstrate whether this new assay has the capacity to provide better patient stratification, leading to better patient response rates and clinical outcomes. Below is the link to the electronic supplementary material. Supplementary file1 (XLSX 19 KB)
Income-Based analysis of health security in Western Asia through an integrated GHSI, MCDM, and Clustering Model
91e01a13-5221-498d-b7c4-dae590b40bf2
11923535
Medicine[mh]
In today’s interconnected world, infectious diseases threaten global security, as seen with COVID-19, Ebola, and tuberculosis. These outbreaks disrupt economies, overwhelm healthcare, and affect vulnerable populations ( ; ). Surveillance data emphasize the rapid cross-border spread of disease ( ), highlighting urgent gaps in preparedness and the need for unified international response strategies ( ). Enhancements in health security not only bolster a nation’s ability to respond to health crises but also promote global collaboration and trust among countries ( ). By improving surveillance systems, health infrastructure, and access to vaccines, nations can detect and manage outbreaks more effectively ( ). Western Asia emerges as a uniquely challenging region for health security, shaped by its intricate blend of geopolitical tensions, diverse populations, and varying health systems ( ). With a history of political conflict and large-scale migration, the region requires robust health strategies to ensure accessible healthcare across borders ( ; ). The effectiveness of health security systems here depends heavily on the region’s ability to monitor and respond swiftly to health crises, including infectious disease threats. Cross-border cooperation, resilient health infrastructure, equitable resource allocation, coordinated data-driven health security strategies, and policy interventions are essential pillars for addressing health risks ( ). Therefore, evaluation and comparing Western Asia’s health security practices, identifying priorities, and setting improvement directions are critical for effectively managing public health across this complex region. The Global Health Security Index (GHSI) serves as a crucial tool for assessing national health security, encompassing various aspects such as prevention, detection, and swift response capabilities ( ; ). In this paper, the first stage was analysing the current literature on the topic of study. Recent studies have made considerable progress in analysing and enhancing this index as a means to evaluate health security and public health resilience across countries, with numerous investigations highlighting its significance in shaping both national policies and global health security initiatives. For example, Boyd et al. conducted a study in examining the efficacy of the GHSI in measuring 195 nations’ readiness for biological threats, with a focus on how it might bolster future preparedness efforts. investigated the connection between Global Health Security (GHS) and Universal Health Coverage (UHC) indices by utilizing Pearson’s correlation coefficient to measure the relationship between these two crucial global health metrics. That same year, explored the role of community health workers in reinforcing resilient health systems and health security. To demonstrate the need for additional funding, delved into the risks and threats of infectious diseases in relation to the current political and socioeconomic contexts affecting GHS. analysed the 2021 Global Health Security (GHS) Index Report, which indicates minimal improvement in global health security preparedness despite emphasizing the necessity to enhance disease surveillance worldwide, particularly in remote rural areas of low-income nations. A practical approach using Kenya as a case study was suggested. In 2020, Ravi et al. 
explored the practical applications of the GHS Index, potential uses to assist both professionals and policymakers in maximizing the tool’s utility, and the importance of ratings and rankings. employed indirect age normalization to assess the GHSI in order to examine the relationship between comparative mortality ratios of excess COVID-19 deaths and pandemic preparedness at the national level. studied the correlation between socioeconomic and demographic factors and COVID-19 mortality and morbidity rates in various countries. They also ranked the nations according to their COVID-19 rates. In , Dobrovolska et al. utilized economic and mathematical modelling to examine the relationships between the GHSI and the Global Cybersecurity Index (GCSI) across 190 countries, with their findings underscoring the interdependencies between these indexes and illuminating potential synergies between health and cybersecurity. Concentrating on island nations, investigated the connection between GHSI scores, excess mortality, and GDP per capita growth during the COVID-19 pandemic, underscoring the impact of health security on macroeconomic outcomes, especially in geographically isolated countries. Moreover, examined correlations between this index and MPOX case rates and discovered that high GHSI scores contribute to the early detection and response to outbreaks, reinforcing the index’s value in infectious disease preparedness. However, while all those studies emphasized that the GHSI plays a significant role in assessing global preparedness and shaping both national policies and global health security initiatives, it exhibits several limitations, particularly when compared to more sophisticated tools such as multi-criteria decision-making (MCDM) models and machine learning clustering techniques. One of its key shortcomings is the insufficient weighting of health security factors, which can lead to inaccurate rankings ( ). Additionally, the absence of advanced ranking and clustering methods restricts the ability to differentiate between various subgroups, such as conflict versus non-conflict nations and countries with different income levels within the same region. These limitations, along with the lack of detailed comparative analysis or dynamic ranking and clustering of nations, hamper a comprehensive understanding of regional health security landscapes and underscore the need for more nuanced methodologies that offer deeper insights into regional performance differences ( ). To address these challenges, a more sophisticated framework that applies advanced methodologies and provides deeper insights into regional variation is essential. Several studies have made significant contributions in this regard. For instance, applied the fuzzy analytic hierarchy process algorithm to analyse the Global One Health Index in 160 countries and territories worldwide. Five additional studies leveraged multi-criteria decision-making (MCDM) methods to assess the 2019 and 2021 GHSI. , employing modified VIKOR and CRITIC weighting, found its rankings diverged from other MCDM methods, suggesting VIKOR captures unique aspects of health system performance. , using MAIRCA on EU member states, revealed strong correlations between GHSI scores and most other MCDM methods, except MAUT, indicating a broad agreement on relative health security performance.
, utilizing PROMETHEE II and SMAA, coupled with a clustering approach, revealed inconsistencies between GHSI classifications and observed COVID-19 performance, highlighting potential flaws in the GHSI’s predictive capabilities and advocating for its revision. , proposed an entropy-TOPSIS-K-means clustering approach for ranking and clustering African countries’ health security practices. applied entropy-VIKOR-K-means clustering methods to determine and compare the relative importance of health security indicators in the EMR region in both 2019 and 2021 and to rank and cluster countries based on their overall performance. This study also utilized Pearson’s and Spearman’s rank correlation coefficients to assess the relationships between the indicators and overall performance. However, in addition to the previously defined limitations of the traditional statistical methods used by the GHSI, these studies found limited applications of the GHSI in global health security assessment and emphasized the need for new evaluation methodologies ( ; ; ). confirmed that methodologies integrating MCDM with the GHSI remain very limited in the literature and have only rarely been used to propose new ways of evaluating and ranking countries’ performance, which is also in line with the finding that the scarcity of sophisticated ranking and clustering analyses restricts the ability to compare health security across regions ( ). In summary, although numerous studies have analyzed health security performance through diverse methodologies, none have integrated income classification, the Global Health Security Index (GHSI), Distance Correlation-based Criteria Importance through Inter-criteria Correlation (D-CRITIC) weighting, the Combined Compromise Solution (CoCoSo), and K-means clustering techniques specifically to examine health security in the Western Asian region. This study introduces an innovative framework for analysing health security in Western Asia, integrating the Global Health Security Index (GHSI), income-based country classification, advanced multi-criteria decision-making (MCDM) methods, specifically D-CRITIC weighting, CoCoSo ranking, and K-means clustering. While prior research has explored health security performance using various approaches, no study has applied this specific combination, which enhances accuracy through the complementary strengths of the MCDM methods. Focusing on 2019–2021 data, this framework evaluates health security performance across Western Asia, emphasizing the differences between income groups. By utilizing D-CRITIC, this study determines the relative importance of health security indicators, while CoCoSo ranks countries based on overall performance, allowing for comparative insights across income classifications. K-means clustering further identifies distinct health security profiles, offering a more comprehensive understanding of the variance between countries, and finally, the Spearman rank correlation method investigates the relationship between the average performances of financial resource allocation indicators and the CoCoSo health security performance scores throughout the study period. This study makes several significant contributions: • It introduces a comprehensive D-CRITIC-CoCoSo-K-means framework that integrates multi-criteria decision-making with dynamic clustering.
This approach overcomes the limitations of global indices like the GHSI, allowing for a more detailed analysis of regional health security. • It underscores the differing priorities among Low-Income Countries (LIC), Lower-Middle-Income Countries (LMC), Upper-Middle-Income Countries (UMC), and High-Income Countries (HIC) in the Western Asia region for both 2019 and 2021, highlighting the importance of context-specific policies and timely interventions. • The research maps health security priorities in Western Asia and categorizes member countries into performance clusters from 2019 to 2021, exposing intra-regional disparities and gaps in resilience following the pandemic. • It identifies existing gaps and offers data-driven recommendations for the ongoing evolution of health security in Western Asia. The methodology of this study followed a six-stage process. This section details each stage along with the integrated model used. 2.1. Data Acquisition: The Global Health Security Index (GHSI) as a Tool for Health Security Preparedness Assessment The Global Health Security Index (GHSI) is a comprehensive instrument for assessing countries’ preparedness and capabilities in addressing health security threats. However, its suitability as a research tool warrants careful consideration given the discrepancies observed between predicted and actual performance during the COVID-19 pandemic ( ). The GHSI provides a multidimensional approach to evaluating health security, encompassing various aspects such as prevention (PR), detection and reporting (DR), rapid response (RR), health system capacity (HS), commitment and adherence (CA), and risk environment (RE) ( ; ; ). The GHSI’s publicly accessible data support extensive research opportunities and offer a holistic perspective on a country’s health security preparedness, and its structured approach facilitates robust comparisons among nations, rendering it a potentially valuable tool for our research ( ). The World Bank classification, which categorizes countries into four income groups based on gross national income (GNI) per capita, delineates the 17 Western Asia (WA) countries as follows: Low-income countries (LIC) comprise Syria and Yemen; lower-middle-income countries (LMC) include Jordan and Lebanon; upper-middle-income countries (UMC) encompass Armenia, Azerbaijan, Georgia, Iraq, and Turkey; and high-income countries (HIC) consist of Bahrain, Cyprus, Israel, Kuwait, Oman, Qatar, Saudi Arabia, and the United Arab Emirates. For our comparative analysis, we extracted data for 17 countries in the WA region, including 2 LIC, 2 LMC, 5 UMC, and 8 HIC. To assess health security outcomes across these groups for each indicator, we examined performance metrics from 2019 and 2021 (available at https://ghsindex.org/report-model/ ). Comprehensive data can be found in Tables 1-3 of the supplementary file ( ) (available at: data repository https://doi.org/10.6084/m9.figshare.27992735.v3 ). Furthermore, to assess the relationship between the financial resources allocated to public health preparedness and the effectiveness of resource utilization from one perspective and the health security outcomes in the region from another, this study utilizes GHSI data on two indicators: F1, a financing indicator, and F2, public healthcare spending levels per capita. A financing indicator, in the context of health security, is a metric that quantifies the financial resources dedicated to enhancing a country’s capacity to prevent, detect, and respond to disease outbreaks. This indicator provides a comprehensive representation of the financial investments made in various domains related to health security, such as preparedness funding, commitments based on Joint External Evaluation (JEE) and Performance Verification System (PVS) assessments, emergency response financing, and accountability for international commitments. The second indicator refers to the domestic general government health expenditure per capita. Comprehensive data can be found in Table 4 of the supplementary file ( ) (available at: data repository https://doi.org/10.6084/m9.figshare.27992735.v3 ).
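As a rough illustration of how the decision matrices used in the following stages might be assembled, the sketch below (in Python; not part of the original study) builds the income-group mapping listed above and splits a hypothetical export of the GHSI category scores into the four analysis subsets. The file name ghsi_wa.csv and its column layout are assumptions for illustration only.

```python
import pandas as pd

# World Bank income classification of the 17 Western Asia countries, as listed in the text.
INCOME_GROUP = {
    "Syria": "LIC", "Yemen": "LIC",
    "Jordan": "LMC", "Lebanon": "LMC",
    "Armenia": "UMC", "Azerbaijan": "UMC", "Georgia": "UMC", "Iraq": "UMC", "Turkey": "UMC",
    "Bahrain": "HIC", "Cyprus": "HIC", "Israel": "HIC", "Kuwait": "HIC",
    "Oman": "HIC", "Qatar": "HIC", "Saudi Arabia": "HIC", "United Arab Emirates": "HIC",
}

# Hypothetical export of the six GHSI category scores per country:
# columns PR, DR, RR, HS, CA, RE indexed by country name.
ghsi = pd.read_csv("ghsi_wa.csv", index_col="Country")

# Attach the income group and split into the four analysis subsets (WA, LIC-LMC, UMC, HIC).
ghsi["Income"] = ghsi.index.map(INCOME_GROUP)
subsets = {grp: df.drop(columns="Income") for grp, df in ghsi.groupby("Income")}
print({grp: df.shape for grp, df in subsets.items()})  # e.g. {'HIC': (8, 6), ...}
```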
2.2. Applying the D-CRITIC Method to Determine and Compare the Relative Importance of Health Security Indicators across Western Asia and Its Income Groups This study employed a modified version of the CRiteria Importance Through Inter-criteria Correlation (CRITIC) method, namely the Distance Correlation-based CRITIC (D-CRITIC) method, to evaluate and compare the relative importance of health security indicators across Western Asia. D-CRITIC improves upon the original CRITIC method by incorporating distance correlation, enabling objective criteria weighting based on both variance and interdependency. This approach was introduced by Krishnan et al. in 2021; it yields more valid and stable weighting and ranking results than traditional CRITIC, thus enhancing decision-making reliability ( ). D-CRITIC offers significant advantages for multi-criteria decision-making (MCDM). By considering both linear and nonlinear relationships between indicators through distance correlation, it enhances the accuracy of objective criteria weighting ( ). Its data-driven approach minimizes subjectivity and potential bias, leading to more equitable, stable, and valid outcomes compared to the CRITIC method ( ). The procedure typically encompasses six distinct steps ( ): Step 1: Creating a decision matrix (DM), denoted by $U = [u_{ij}] = \begin{bmatrix} u_{11} & u_{12} & \cdots & u_{1n} \\ \vdots & \vdots & \ddots & \vdots \\ u_{m1} & u_{m2} & \cdots & u_{mn} \end{bmatrix}$ (1), where the score of country $i$ on indicator $j$ is represented by $u_{ij}$, and $m$ and $n$ indicate the total number of alternatives (countries) and criteria (health security indicators), respectively. Step 2: Computing the normalized DM using (2): $z_{ij} = u_{ij}/u_j^{+}$ for benefit criteria and $z_{ij} = u_j^{-}/u_{ij}$ for cost criteria (2), where $u_{ij}$ denotes the score assigned to alternative $i$ for criterion $j$, $u_j^{+}$ represents the highest score attained for criterion $j$, and $u_j^{-}$ represents the lowest score recorded for criterion $j$. Step 3: Calculating the standard deviation ($SD_j$) for each criterion: $SD_j = \sqrt{\sum_{i=1}^{m}\left(z_{ij}-\bar{z}_j\right)^{2}/(m-1)}$ (3), where $\bar{z}_j$ represents the arithmetic mean (average) of the normalized values for criterion $j$ across all $m$ alternatives. Step 4: Exploring inter-criterion relationships with distance correlation, defined for any two criteria $c_j$ and $c_{j'}$ as $\mathrm{dCor}(c_j,c_{j'}) = \dfrac{\mathrm{dCov}(c_j,c_{j'})}{\sqrt{\mathrm{dVar}(c_j)\,\mathrm{dVar}(c_{j'})}}$ (4). Here, $\mathrm{dCov}(c_j,c_{j'})$ represents the distance covariance between $c_j$ and $c_{j'}$, while $\mathrm{dVar}(c_j)=\mathrm{dCov}(c_j,c_j)$ represents the distance variance of $c_j$, and $\mathrm{dVar}(c_{j'})=\mathrm{dCov}(c_{j'},c_{j'})$ represents the distance variance of $c_{j'}$. Step 5: Computing the information content of each criterion: $I_j = SD_j \sum_{j'=1}^{n}\bigl(1-\mathrm{dCor}(c_j,c_{j'})\bigr)$ (5), where $I_j$ quantifies the amount of information contained within the $j$th criterion. Step 6: Computing the objective weights. In this step, the objective weight of each $j$th criterion, reflecting its relative importance in decision-making, is calculated as $w_j = I_j \big/ \sum_{j=1}^{n} I_j$. In our study, this process was repeated eight times to assess the relative importance of health security indicators in the WA, LIC-LMC, UMC, and HIC groups of countries in the region for both 2019 and 2021. Detailed calculations for this step are readily available in the supplementary file (Sec. 2) ( ) (Available at https://github.com/ProfAdelAbdulsalam/supplementary-material-and-softwares/tree/1.0.0 ) ( https://doi.org/10.5281/zenodo.14541236 ).
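A minimal, illustrative implementation of the six D-CRITIC steps is sketched below (Python; it assumes a NumPy array with countries as rows and the GHSI indicators as columns, all function names are ours, and the distance correlation is computed with the standard double-centering estimator rather than a dedicated library):

```python
import numpy as np

def _dcov2(x, y):
    """Squared sample distance covariance via double-centered distance matrices."""
    a = np.abs(x[:, None] - x[None, :])
    b = np.abs(y[:, None] - y[None, :])
    A = a - a.mean(0, keepdims=True) - a.mean(1, keepdims=True) + a.mean()
    B = b - b.mean(0, keepdims=True) - b.mean(1, keepdims=True) + b.mean()
    return (A * B).mean()

def distance_correlation(x, y):
    """Distance correlation dCor(x, y) as in Eq. (4)."""
    den = np.sqrt(_dcov2(x, x) * _dcov2(y, y))
    return np.sqrt(_dcov2(x, y) / den) if den > 0 else 0.0

def d_critic_weights(U, benefit=None):
    """D-CRITIC weights for an m x n decision matrix U (rows = countries)."""
    U = np.asarray(U, dtype=float)
    m, n = U.shape
    benefit = np.ones(n, dtype=bool) if benefit is None else np.asarray(benefit)
    # Step 2: normalization (Eq. 2); all GHSI indicators are benefit criteria.
    Z = np.where(benefit, U / U.max(0), U.min(0) / U)
    # Step 3: standard deviation of each criterion across the m alternatives (Eq. 3).
    sd = Z.std(0, ddof=1)
    # Step 4: pairwise distance correlations between criteria (Eq. 4).
    dcor = np.array([[distance_correlation(Z[:, j], Z[:, k]) for k in range(n)]
                     for j in range(n)])
    # Step 5: information content of each criterion (Eq. 5).
    info = sd * (1.0 - dcor).sum(1)
    # Step 6: objective weights, which sum to 1 by construction.
    return info / info.sum()
```

For the GHSI data all indicators are benefit criteria, so the default benefit mask applies; under the hypothetical data-loading sketch above, the HIC weights would be obtained as d_critic_weights(subsets["HIC"].values).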
2.3 Assessing and ranking WA countries using a combined D-CRITIC-CoCoSo framework The Combined Compromise Solution (CoCoSo) method, introduced by , is a robust tool in multi-criteria decision-making (MCDM). This method effectively integrates the strengths of simple additive weighting (SAW) and the exponentially weighted product model, making it particularly adept at generating balanced and reliable rankings. Its intuitive design, adaptability, and capacity to deliver actionable insights have made it widely applicable across various decision-making contexts, enhancing its relevance to our study ( ; ; ). It comprises the following implementation steps ( ): Step 1: Constructing the initial and normalized decision matrices. In this step, the initial DM ($U_{ij}$) is constructed as described in (1) and then normalized as outlined in (6): $y_{ij} = \dfrac{u_{ij}-u_j^{-}}{u_j^{+}-u_j^{-}}$ if criterion $j$ is a benefit criterion, and $y_{ij} = \dfrac{u_{ij}-u_j^{+}}{u_j^{-}-u_j^{+}}$ if criterion $j$ is a cost criterion (6). Step 2: Computing the weighted comparability sequences, which are derived by (7) and (8): $S_i = \sum_{j=1}^{n} w_j y_{ij}$ (7) and $P_i = \sum_{j=1}^{n} (y_{ij})^{w_j}$ (8). In this context, the weighting vector $w_j$ indicates the relative significance assigned to the $j$th indicator, while $y_{ij}$ represents the normalized performance score of the $i$th alternative concerning the $j$th indicator. Step 3: Calculating the evaluation scores of the alternatives. At this stage, three distinct evaluation scores are calculated as follows: $k_{ia} = \dfrac{P_i+S_i}{\sum_{i=1}^{m}(P_i+S_i)}$ (9), $k_{ib} = \dfrac{S_i}{\min_i S_i} + \dfrac{P_i}{\min_i P_i}$ (10), and $k_{ic} = \dfrac{\lambda S_i + (1-\lambda)P_i}{\lambda \max_i S_i + (1-\lambda)\max_i P_i}$ (11). Here, the parameter $\lambda$, ranging from 0 to 1, allows the decision-maker to fine-tune the relative importance of two aggregation techniques in determining the final compromise solution: the additive weighted sum ($S_i$) and the exponentially weighted product ($P_i$). This parameter serves as a control mechanism for balancing the influence of these two methods. A common practice is to use a value of 0.5, which gives equal weight to both $S_i$ and $P_i$, ensuring a balanced integration of the two approaches in the final outcome. Step 4: Calculating the composite score index and ranking the alternatives. Finally, (12) is used to calculate the composite score: $C_i = \left(k_{ia}\,k_{ib}\,k_{ic}\right)^{1/3} + \tfrac{1}{3}\left(k_{ia}+k_{ib}+k_{ic}\right)$ (12). In this investigation, the aforementioned process was iterated twice to rank Western Asian countries based on their GHSI scores for both 2019 and 2021. The alternatives are arranged in descending order according to their calculated scores. For a comparative analysis, the Western Asian-ranked countries were further categorized by income group.
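The CoCoSo aggregation described above might be sketched as follows (again an illustrative Python fragment with hypothetical names, using λ = 0.5 as stated in the text; it assumes no alternative scores worst on every criterion, since S_i = 0 would make Eq. (10) undefined):

```python
import numpy as np

def cocoso_scores(U, w, benefit=None, lam=0.5):
    """CoCoSo composite scores Ci for an m x n decision matrix U and weight vector w."""
    U = np.asarray(U, dtype=float)
    m, n = U.shape
    benefit = np.ones(n, dtype=bool) if benefit is None else np.asarray(benefit)
    lo, hi = U.min(0), U.max(0)
    # Step 1: min-max normalization (Eq. 6).
    Y = np.where(benefit, (U - lo) / (hi - lo), (U - hi) / (lo - hi))
    # Step 2: weighted sum and weighted product comparability sequences (Eqs. 7-8).
    S = (Y * w).sum(1)
    P = (Y ** w).sum(1)
    # Step 3: the three appraisal scores (Eqs. 9-11).
    k_a = (P + S) / (P + S).sum()
    k_b = S / S.min() + P / P.min()
    k_c = (lam * S + (1 - lam) * P) / (lam * S.max() + (1 - lam) * P.max())
    # Step 4: composite score (Eq. 12); higher Ci indicates better performance.
    return (k_a * k_b * k_c) ** (1 / 3) + (k_a + k_b + k_c) / 3

# Countries would then be ranked in descending order of Ci,
# e.g. order = np.argsort(-cocoso_scores(U, w)) for the row indices from best to worst.
```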
Detailed calculations for this procedure are available in the supplementary file (Sec. 3) ( ) (Available at https://github.com/ProfAdelAbdulsalam/supplementary-material-and-softwares/tree/1.0.0 ) ( https://doi.org/10.5281/zenodo.14541236 ). 2.4 Clustering Western Asian Countries Based on Their Composite Score Index Clustering is a foundational unsupervised machine learning technique that organizes unlabeled data points into distinct groups (clusters) based on shared characteristics or similarities, such as distance metrics or density patterns ( ; ). Unlike supervised learning, which relies on labeled outcomes to train models, clustering identifies inherent structures within datasets without prior knowledge of categories or results. Algorithms like K-means and hierarchical clustering enable this process, revealing hidden patterns that support tasks such as anomaly detection or infection detection ( ; ). As a cornerstone of exploratory data analysis, clustering empowers researchers to uncover latent relationships, trends, or subgroups within complex datasets, driving hypothesis generation and informed decision-making ( ). K-means clustering is a widely used learning algorithm for data mining and pattern recognition ( ). It partitions n observations into k clusters, where each observation belongs to the cluster with the nearest mean. The algorithm operates by iteratively assigning data points to the closest cluster center and subsequently recalculating the cluster centers based on new assignments ( ). This method is well known for its efficiency, simplicity, and adaptability across various fields, making it highly effective for analyzing health security data. It was utilized to categorize countries in multiple health security-based studies, including those in Africa (Nasser and Alghawli, 2024) and the Eastern Mediterranean Region (Nasser et al. 2025), according to their health security (HeS) scores. The advantages of K-means include computational efficiency and ease of implementation, rendering it suitable for large datasets ( ). Additionally, this technique groups alternatives based on their similarities, reduces variance within each cluster, and ensures that the items in each group share similar HeS attributes. For these reasons, K-means is particularly appropriate for our analysis, yielding results that are clear and easy to interpret. The standard K-means algorithm follows these steps ( ): 1) initialize k cluster centers, often randomly; 2) assign each data point to the nearest cluster center; 3) recalculate the cluster centers based on the new assignments; 4) repeat steps 2 and 3 until convergence or a maximum number of iterations is reached. In this study, the aforementioned process was also repeated twice to cluster Western Asian countries based on their D-CRITIC-COCOSO scores for both 2019 and 2021. The countries were clustered using a five-tiered grading system, ranging from “excellent” (Grade 1) to “poor” (Grade 5). The Western Asian-clustered countries were further divided into income tiers for comparative research. The supplementary file contains detailed calculations for this process (Sec.4) (Available at https://doi.org/10.6084/m9.figshare.27992735.v3 ) ( ). 
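As a sketch of the clustering step, the fragment below (Python with scikit-learn; the helper name and the relabelling convention are ours) partitions the CoCoSo composite scores into five groups and relabels them so that Grade 1 corresponds to the cluster with the highest mean Ci, matching the five-tier grading described above:

```python
import numpy as np
from sklearn.cluster import KMeans

def grade_countries(ci_scores, n_clusters=5, seed=0):
    """Cluster CoCoSo composite scores into five performance grades (1 = best, 5 = worst)."""
    ci = np.asarray(ci_scores, dtype=float).reshape(-1, 1)
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=seed).fit(ci)
    # Order the cluster centres from best (highest mean Ci) to worst (lowest mean Ci).
    order = np.argsort(-km.cluster_centers_.ravel())
    grade_of_cluster = {cluster: grade + 1 for grade, cluster in enumerate(order)}
    return np.array([grade_of_cluster[label] for label in km.labels_])
```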
2.5 Examining the correlation between independent financial resource allocation indicators and D-CRITIC-COCOSO Health Security Outcomes We utilized the Spearman rank correlation method to investigate the relationship between the average performances of financial resource allocation indicators (financing (F1) and public healthcare spending levels per capita (F2)) and the mean COCOSO health security performance scores (Ci) throughout the study period. This non-parametric technique is particularly appropriate for evaluating the association between two variables without assuming specific underlying distributions ( ). The Spearman correlation coefficient (ρ) was determined by independently ranking the sample values of both variables and then inserting the squared differences between these ranks into a formula derived from the Pearson correlation ( ). This approach is beneficial as it emphasizes the relative positions of the values rather than their absolute disparities, making it resilient to outliers and nonlinear relationships ( ). An alternative method involves converting the values of both variables into standard scores, ranking the combined standard scores in a single sequence, and then calculating the Pearson correlation between the ranks corresponding to the original scores. This modified approach has been shown to be slightly more powerful than the conventional Spearman method for various distributions and sample sizes ranging from 8 to 30 ( ). In summary, the Spearman rank correlation method was chosen because of its ability to assess the strength and direction of the monotonic relationship between resource allocation indicators and COCOSO scores. This technique is particularly effective in situations where the relative rankings are more important than the exact values, making it well suited for comparing performance indicators across different domains ( ; ).
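A minimal sketch of this correlation step (Python with SciPy; function and variable names are illustrative) is:

```python
from scipy.stats import spearmanr

def financing_vs_performance(f1, f2, ci):
    """Spearman rank correlations between the two financing indicators
    (F1: health security financing, F2: public health spending per capita)
    and the CoCoSo composite scores (Ci)."""
    rho_f1, p_f1 = spearmanr(f1, ci)
    rho_f2, p_f2 = spearmanr(f2, ci)
    return {"F1-Ci": (rho_f1, p_f1), "F2-Ci": (rho_f2, p_f2)}
```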
2.6 Comparative analysis This phase consolidates and evaluates the outcomes from previous stages, examining the comparative importance of indicators, national rankings, and grouping trends across Western Asia and its economic subdivisions. The evaluation centered on uncovering regional inequalities and crucial patterns in health security performance for both 2019 and 2021. The outcomes will guide the creation of specific suggestions to improve health security readiness in each income category and the region overall. The key findings are outlined in the following sections. 2.7 Software To determine the relative significance of health security indicators across Western Asian nations, we developed two advanced Excel-based software tools. Supplementary Software 1 - Distance Correlation-Based CRITIC Software: source software available from https://github.com/ProfAdelAbdulsalam/supplementary-material-and-softwares/tree/1.0.0 ; archived software available from https://doi.org/10.5281/zenodo.14541236 ( ); license: OSI-approved open license (GNU General Public License v3.0). This tool enables a detailed analysis by categorizing countries into income groups—Low-Income Countries (LIC), Lower-Middle-Income Countries (LMC), Upper-Middle-Income Countries (UMC), and High-Income Countries (HIC)—and provides valuable insights for the years 2019 and 2021, with a particular focus on analyzing health security in low-income and lower-middle-income countries. For the ranking analysis, we created Supplementary Software 2—An Integrated GHSI, MCDM, and Clustering Model for Health Security Analysis in Western Asia: source software available from https://github.com/ProfAdelAbdulsalam/supplementary-material-and-softwares/tree/1.0.0 ; archived software available from https://doi.org/10.5281/zenodo.14541236 ( ); license: OSI-approved open license (GNU General Public License v3.0). Supplementary Software 2 evaluates the health security performance of these nations for both 2019 and 2021, integrating weighting, ranking, clustering, and Spearman rank correlation analysis into a unified Excel-based tool, offering a robust framework for health security performance assessment. For the clustering analysis, a freely available tool was utilized (Cluster Analysis for Marketing - Free Download, https://www.clusteranalysis4marketing.com/a-marketers-guide-to-cluster-analysis/free-download/ ).
4.1 Relative Importance of Health Security Indicators in WA region (2019 vs 2021) Generally, in the D-CRITIC method, an indicator’s weight is directly linked to its information content (IC), with higher weights assigned to indicators that offer greater unique information, as measured by distance correlation. This adjustment ensures that indicators contributing more valuable information are weighted more heavily than those with less distinct content ( ). The computed information content values and corresponding weights are presented in the accompanying tables. 4.2 Health security performance ranking and clustering results This section summarizes the results of the COCOSO analysis and K-means clustering of the health security performance of WA countries for 2019 and 2021. Countries are divided into LIC, LMC, UMC and HIC groups. Key metrics include the health security score ( C i ), regional rank ( R i ), cluster membership ( S i ), and changes in scores and ranks over the two years. Higher C i values indicate better performance, with R i = 1 denoting the top regional rank, while cluster membership ranges from Cluster 1 (best) to Cluster 5 (worst). 4.3 The relationships between independent financial resource allocation indicators and D-CRITIC-COCOSO Health Security Outcomes Our study used the Spearman rank correlation method to analyse the relationship between financial resource allocation indicators, specifically financing for health security initiatives (F1) and public healthcare spending per capita (F2), and overall health security performance scores (F3) across 17 countries. We found strong, positive correlations between both financial indicators and health security performance, with correlation coefficients of 0.98 between F1 and F3 and 0.97 between F2 and F3. These findings suggest a substantial association between financial investment in health security and the capacity to prevent, detect, and respond effectively to health threats. 4.4 Discussion 4.4.1 Health security priorities in western Asia (2019-2021) Our analysis reveals a consistent emphasis on strengthening foundational health infrastructure, highlighted by the sustained high weight of the Health System (HS) indicator, which received weights of 0.227 in 2019 and 0.189 in 2021. This underscores the robust commitment to developing a resilient healthcare system capable of addressing diverse health challenges. These findings align with the ASPR’s National Health Security Strategy (2019–2022), which emphasizes the necessity of a strong health system to manage emerging health threats and to enhance resilience against future crises ( ). The significant weight assigned to the Detection and Reporting (DR) indicator further illustrates the importance of timely and accurate information in health security. With weights of 0.189 in 2019 and 0.186 in 2021, this underscores the critical role of robust disease surveillance and reporting systems in mitigating public health emergencies, resonating with the U.S. government’s Global Health Security Strategy ( ). While foundational health infrastructure and detection/reporting mechanisms remain central to Western Asia’s health security strategy, a noteworthy shift towards prioritizing rapid response (RR) and risk environment (RE) management has emerged in 2021.
This change reflects the dynamic nature of health security challenges, as illustrated by the COVID-19 pandemic, which necessitates adaptive policymaking ( ). The third-ranking indicator, commitments and adherence (CA), also gained prominence, with weights of 0.167 and 0.174 in 2019 and 2021, respectively. This trend indicates that policymakers are increasingly recognizing the importance of maintaining health protocols and international commitments as foundational for effective health security. Conversely, indicators such as RE, Prevention (PR), and RR initially exhibited lower weights, suggesting that they were deprioritized in favor of foundational elements. However, the increased weights for RR (0.157) and RE (0.156) in 2021 indicate a strategic pivot towards enhancing preparedness for immediate health crises and acknowledging the impacts of environmental health risks. This adaptive approach in health security policymaking is further supported by studies highlighting the effectiveness of Rapid Response Teams (RRTs) in reducing mortality rates and improving patient outcomes in critical situations ( ). For instance, a systematic review demonstrated that RRT implementation significantly decreased the incidence of cardiac arrest and overall mortality in hospitals, emphasizing the essential role of timely intervention during emergencies. Additionally, the increased emphasis on RE reflects a growing awareness of how environmental factors can exacerbate health crises, as outlined by the Asia-Pacific Health Security Action Framework ( ), which advocates for multisectoral strategies to strengthen resilience against public health emergencies. Our findings suggest that resource allocation should continue to prioritize foundational health systems and robust detection/reporting capabilities while also addressing the evolving need for effective rapid response mechanisms and comprehensive environmental risk management. Moreover, this strategic shift toward a more adaptive health security strategy addresses gaps identified by the Global Health Security Index (GHSI), which highlights deficiencies in public health emergency response plans in several countries ( ). By prioritizing foundational infrastructure, detection/reporting, rapid response, and risk environment management, Western Asia is making strides to enhance its overall health security. The dynamic nature of health threats, especially as highlighted by the COVID-19 pandemic, necessitates an adaptive approach to health security policymaking ( ). The increased weights for RR and RE indicate a dual focus on responding to immediate crises and preparing for the future challenges posed by infectious diseases and environmental hazards. This aligns with the assertion that effective decision-making requires continuous evaluation and comparison. By focusing on these key indicators, policymakers can significantly enhance health security outcomes, ultimately fostering a more resilient public health landscape in Western Asia. 4.4.2 Health security priorities across income groups The accompanying figures and tables showcase the changes in D-CRITIC weightings of health security indicators for different income groups (HIC, UMC, LIC/LMC) from 2019 to 2021. High-income countries (HICs) consistently prioritized detection and reporting (DR), health systems (HS), and commitment and adherence (CA) in both years. In 2019, DR led with a weight of 0.277, followed by HS at 0.212 and CA at 0.172. By 2021, DR remained the top priority at 0.258, CA increased to 0.208, and HS decreased slightly to 0.172.
This minor shift indicates slight reprioritization, with CA gaining prominence. Prevention scores remained consistently low, suggesting that this was not a primary concern for HICs. Upper-middle-income countries (UMCs) focused on health systems (HS), detection and reporting (DR), and risk environments (RE) as their main priorities. In 2019, HS had the highest weighting at 0.279, followed by DR at 0.175 and RE at 0.170. A notable shift occurred in 2021, with RE gaining importance and increasing to 0.210, surpassing DR at 0.197. This change likely reflects an increased awareness of environmental and external risks. HS remained a priority despite a slight decrease in weighting, whereas commitments and adherence continued to rank relatively low, similar to HICs. Low- and lower-middle-income countries (LICs/LMCs) experienced a significant shift from 2019 to 2021. In 2019, DR (0.243), HS (0.195), and CA (0.176) were the focus. However, by 2021, prevention (PR), DR, and rapid response (RR) had emerged as the top-weighted indicators. PR showed a substantial increase to 0.261, with DR at 0.248 and RR at 0.179. This change indicates a greater emphasis on proactive health security measures, with prevention becoming the primary focus. The lower weighting for HS may suggest resource limitations or the prioritization of other immediate needs over structural healthcare improvements. Across all income groups, DR consistently received high weights, highlighting a universal priority for early health threat identification. HS showed a general decline in weighting over time, particularly in LICs and LMCs, possibly due to resource constraints that limit structural investments. The significant increase in the prevention weighting for LICs and LMCs by 2021 may reflect the growing focus on preventive healthcare measures. These findings indicate evolving health security priorities, with LICs and LMCs increasingly emphasizing prevention, whereas HICs and UMCs focus on detection capabilities. This shift may be influenced by region-specific challenges and capacity differences, especially in response to the post-2019 health security events. 4.4.3 Ranking and clustering results This study’s regional analysis across Western Asia (WA) from 2019 to 2021 reveals important trends in health security, particularly as countries faced the COVID-19 pandemic’s far-reaching impacts. Results show a mixed picture: while a few countries improved or maintained their positions within their clusters, others experienced notable declines, underscoring persistent health security gaps across the region. These findings highlight the effects of shared vulnerabilities, resource constraints, and the role of international partnerships in bolstering public health systems. The implications of these results span clusters and point to strategic areas for policy intervention, targeted resource allocation, and regional collaboration. • Cluster 1: High Performers with Relative Declines As shown in , countries in Cluster 1, representing the highest health security performers in WA, display trends that reveal both resilience and vulnerability in the face of regional pressures. Armenia, for instance, experienced a substantial decline in its Composite Index (Ci) score, dropping from 16.341 in 2019 to 9.635 by 2021. This significant reduction, attributed to internal challenges and exacerbated by pandemic-related constraints, mirrors findings by , who noted similar declines in regional health resilience under shared resource pressures ( ).
Armenia’s retention of its top rank within WA, despite this decline, suggests that other countries in the region similarly struggled with pandemic challenges and that the gap between higher and lower performers may be widening. In contrast, Qatar’s health security investments yielded an improved regional ranking, moving from 6th to 4th between 2019 and 2021. Qatar’s progress highlights the role of high-income countries (HIC) in leveraging resources for health system reforms, as shown by , who documented how targeted investments enhance resilience and crisis response ( ). Georgia’s ascension to the second rank within Cluster 1 demonstrates the critical impact of sustained international partnerships, specifically its collaboration with the CDC, which has supported Georgia’s health infrastructure over decades ( ; ). This finding underscores the need for continuous external support to strengthen regional health security, especially for countries that lack the resources to maintain advanced health infrastructure independently. • Cluster 2: Incremental Improvements in Health Capacity Analysis of Cluster 2 countries shows modest but notable improvements, particularly in detection (DR) and health system capacity (HS) metrics. For instance, Jordan and Cyprus moved from the third to the second cluster, although their regional rankings remained stable at seventh and eighth, respectively. Jordan’s 19% improvement in DR and 17% increase in HS scores underscore the role of digital health investments, including telemedicine, in bolstering health capacities during the COVID-19 pandemic, as demonstrated ( ). Similarly, Cyprus’s gains in DR and HS indicate that even modest investments in health technology and infrastructure can have significant impacts on national health security. Despite these improvements, other high-income countries (HICs) in Cluster 2, such as Israel and Saudi Arabia, experienced slight declines in their regional rankings, falling from 2nd to 5th and 5th to 6th, respectively. This suggests that without sustained investment and regional cooperation, even high-income nations can see stagnation or regression in health security performance. Regional collaboration on health infrastructure, data sharing, and workforce development could provide a more stable foundation for health security in the region, as emphasized in studies on health resilience in high-income nations facing systemic constraints ( ). • Cluster 3: Variable Trends and Rising Health System Needs The countries in Cluster 3 reveal both improvements and challenges. An increase in the number of WA countries from five (29%) in 2019 to six (35%) in 2021, as shown in , reflects a slight increase in health security needs in this cluster, where most nations are either high-income (67%), lower-middle-income (LMC) (17%), or upper-middle-income (UMC) (17%). Oman’s improvement from the 10th to the 9th regional rank demonstrates incremental progress, likely supported by targeted investments in health capacity and detection infrastructure. Conversely, Bahrain and Kuwait experienced minor declines in their rankings, highlighting the need for continued investment in preventive and rapid response capabilities to sustain health security gains. Lebanon and Azerbaijan’s advancement in regional and cluster rankings suggests that even under economic constraints, countries can improve health security by prioritizing critical areas, such as prevention (PR) and detection (DR). 
Research indicates that optimizing resource allocation towards these core functions, especially in times of crisis, can significantly improve a country’s health system resilience ( ). The trends in Cluster 3 suggest that WA countries could benefit from a model that emphasizes foundational health security investments even in the absence of extensive resources to ensure sustainability and incremental progress. • Clusters 4 and 5: Persistent Vulnerabilities in Low-Resource Nations The most concerning findings relate to Clusters 4 and 5, where low-income and conflict-affected countries, such as Iraq, Yemen, and Syria, have faced regressions or stagnation in health security capabilities. Despite modest increases in certain health scores, Iraq’s decline from the fourth to the fifth cluster indicates insufficient systemic resilience to maintain previous gains. Iraq’s lower detection (DR) and prevention (PR) scores indicate gaps in disease surveillance and public health infrastructure, areas that are crucial for crisis response yet are often neglected in low-resource settings ( ). Syria and Yemen’s consistent positioning in the fifth cluster reveals systemic public health weaknesses exacerbated by ongoing political and economic instability. Yemen’s PR score of just 0.8 in 2021 underscores the severe limitations in preventive health capabilities, leaving the country highly vulnerable to disease outbreaks. Similarly, Syria’s low DR and HS scores illustrate critical deficiencies in early detection and health-system capacity, which is consistent with studies noting that crisis-affected regions require a comprehensive approach that combines infrastructure development, workforce training, community engagement, and policy reform to achieve long-term health security ( ). 4.4.4 Spearman rank correlation results The near-perfect correlations between the financial resource allocation indicators, specifically financing for health security initiatives (F1) and public healthcare spending per capita (F2), and overall health security performance scores (Ci) across WA countries underscore that financial investment plays a critical role in bolstering health security. Higher levels of both targeted health security financing (F1) (0.98) and general public healthcare spending (F2) (0.97) were consistently associated with higher health security performance (Ci). This relationship is particularly relevant for countries with substantial financial inputs, such as Armenia and Georgia, which show high health security scores, reflecting the effectiveness of financial investments in achieving improved preparedness and response capabilities. Conversely, countries with minimal or no spending in these categories, such as Syria and Yemen, exhibit some of the lowest health security scores, highlighting a likely resource gap in public health infrastructure that limits their resilience against health threats. 4.5 Implications and recommendations The observed trends indicate shifting priorities in health security strategies tailored to resource levels and specific risks faced by different income groups. High-income and upper-middle-income countries continue to prioritize detection and reporting, emphasizing the early identification of health threats to support rapid responses. By 2021, low- and lower-middle-income countries were increasingly focusing on prevention as a top priority, reflecting a proactive approach in contexts where reactive capacities may be limited.
The lower weighting for health systems across all groups suggests either resource limitations or strategic reallocation toward other critical health security areas, possibly in response to post-2019 health events. These evolving priorities underscore diverse approaches to health security, reflecting each income group’s unique challenges, resources, and public health goals. Furthermore, this study highlights considerable disparities in health resilience among countries shaped by income levels, resource availability, and political stability. High-income countries (HICs), such as Qatar and Georgia, improved their health security rankings by leveraging financial investments and strategic international partnerships, while conflict-affected, low-resource nations, namely Iraq, Yemen, and Syria, experienced stagnation or decline. These findings underscore the urgent need for a regional framework to address health security inequities, bridge the gap between high- and low-income nations, and foster sustainable public health resilience across the WA. The observed disparities align with the broader global patterns seen in low-income and politically unstable regions, where limitations in funding and infrastructure often constrain health security progress. Iraq’s mixed performance—with some improvements in detection and reporting (DR) but an overall cluster decline—illustrates the challenge of sustaining health system gains in volatile settings without consistent, comprehensive investment and policy commitment. Similarly, Yemen’s persistently low scores in prevention (PR) mirror challenges observed in sub-Saharan Africa, where under-resourced health systems struggle to prioritize preventive health measures, leading to high public health burdens and limited security capabilities ( ). The results indicated that the challenges faced by Iraq, Yemen, and Syria in health security are intricately linked to substantial resource allocation issues. For Iraq, the decline from the fourth to the fifth cluster, despite some advances in specific areas, such as detection, suggests that targeted investments without a holistic, integrated approach may be insufficient. Improvements in specific functions such as detection capabilities offer limited, temporary benefits if not supported by robust health infrastructure, workforce training, and preventive care, which are necessary for long-term resilience ( ). The persistently low PR and DR scores for Yemen and Syria emphasize the urgent need for foundational health infrastructure investments, particularly in crisis-prone regions where preventive measures are essential to mitigate disease spread. Research in crisis settings highlights that strengthening early detection and preventive health measures reduces the spread of communicable diseases and alleviates public health burden over time ( ). Allocating resources to these core health security areas, such as early intervention, surveillance, and effective disease management, can enhance health resilience and reduce vulnerability to future crises. The findings of the correlation analysis have important policy implications. First, they suggested that investments in health security financing (F1) and general healthcare spending (F2) are likely critical for achieving higher health security performance. Governments and international organizations aiming to enhance global health security might consider these financial indicators as key benchmarks for strengthening the health system. 
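To make the correlation analysis behind these recommendations concrete, the sketch below shows how Spearman rank correlations between the financing indicators (F1, F2) and the composite score (Ci) can be computed with SciPy. The country labels and values are invented placeholders for illustration only, not the study's GHSI data.

```python
# Illustrative Spearman rank correlation between financing indicators and composite scores.
# The values below are invented placeholders, NOT the study's dataset.
from scipy.stats import spearmanr

countries = ["A", "B", "C", "D", "E", "F"]
f1 = [12.5, 9.8, 7.4, 5.1, 2.0, 0.0]    # financing for health security initiatives (F1)
f2 = [850, 610, 400, 310, 90, 40]       # public healthcare spending per capita (F2)
ci = [16.3, 14.1, 11.2, 9.6, 4.2, 2.9]  # composite health security score (Ci)

rho_f1, p_f1 = spearmanr(f1, ci)
rho_f2, p_f2 = spearmanr(f2, ci)
print(f"F1 vs Ci: rho = {rho_f1:.2f} (p = {p_f1:.3f})")
print(f"F2 vs Ci: rho = {rho_f2:.2f} (p = {p_f2:.3f})")
```

Because the toy inputs are perfectly monotonic, both coefficients come out at 1.0; the reported values of 0.98 and 0.97 indicate a nearly, but not perfectly, monotonic relationship across the WA countries.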
By strategically increasing F1 and F2, countries may improve their ability to meet international health security standards and enhance their response capabilities in line with the Joint External Evaluation (JEE) and Performance Verification System (PVS) recommendations. To mitigate the identified challenges, policy interventions that prioritize preventive care, detection, and overall health system resilience are crucial for countries facing stagnation or decline in health security. For Iraq, policy adjustments focused on integrating resources across crisis responses, infrastructure, and workforce development could promote a balanced health system and ensure sustainable improvement. For Yemen and Syria, persistently low PR and DR scores underscore the need for both regional and international support to establish a basic public health infrastructure. Regional cooperation can also standardize protocols, enhance healthcare workforce training, and align public health policies with international standards, thus enabling lower-resource countries to manage public health threats more effectively and reduce their vulnerability to crises. Based on the above analysis, the findings of this study provide practical directions for enhancing health security (HS) from a global perspective, emphasizing actionable pathways to advance health security goals. Policymakers, international organizations, and regional stakeholders must adopt targeted strategies to address disparities across income groups and performance clusters as follows: 1. Income-Specific Resource Allocation : High-income countries (HICs) like Qatar and the UAE should prioritize investments in advanced detection systems, rapid response mechanisms, and cross-border data-sharing platforms to maintain their leadership in health security. Upper-middle-income countries (UMCs) such as Georgia and Armenia require funding to modernize health infrastructure and manage risks associated with climate change and pandemics. For low- and lower-middle-income countries (LICs/LMCs), including Yemen and Syria, international aid should focus on developing preventive care networks and training community health workers for grassroots surveillance—essential measures to mitigate outbreak risks in resource-constrained settings. 2. Cluster-Specific Interventions : Tailored interventions can further enhance these priorities. High-performing HICs in Cluster 1, such as Qatar and Israel, should establish regional innovation hubs for vaccine development and digital health tools while sharing best practices, such as telemedicine frameworks, to support neighboring countries. Cluster 2 countries like Jordan and Saudi Arabia must address prevention gaps through mobile clinics and behavioral health campaigns, backed by public-private partnerships, while adopting interoperable health systems to improve detection capabilities. For conflict-affected nations in Clusters 4-5, such as Iraq and Yemen, a dual humanitarian-development approach is critical, combining emergency aid with long-term projects like laboratory reconstruction and epidemiologist training, as well as deploying community health workers for malnutrition screening and disease surveillance. Furthermore, from a sustainability perspective, the findings of this study also offer actionable pathways to advance the Sustainable Development Goals (SDGs)—particularly SDG 3, which focuses on ensuring healthy lives and well-being for all in Western Asia. 
By aligning health security priorities and strategies with SDG 3 targets, such as universal health coverage (UHC), epidemic preparedness, and health system resilience, policymakers can effectively address disparities while promoting equitable progress: 1. Prioritizing Strategic Resource Allocation : HICs like Qatar and the UAE should focus on investing in AI-driven surveillance and cross-border data-sharing platforms. UMCs such as Georgia and Armenia require funding for climate-resilient infrastructure and integrated disease surveillance. LICs and LMCs, including Yemen and Syria, urgently need international aid to establish primary healthcare networks and deploy community health workers (CHWs) for preventive care and outbreak detection. 2. Bridging Health System Gaps : Decision-makers in Western Asia should allocate resources towards bridging the divide between health system strengthening and measurable health security outcomes. By utilizing the identified health security priorities for the studied groups, investments can be directly aligned with SDG targets, enabling an evaluation of how targeted funding translates into tangible improvements. 3. Regional Leadership in Health Innovations : High-performing HICs, such as Qatar and Israel, can spearhead regional vaccine research hubs and share telemedicine frameworks. Moderate performers like Jordan and Saudi Arabia should expand mobile clinic access to reduce maternal and child mortality and implement interoperable health records. Conflict-affected nations in Clusters 4-5 should focus on rebuilding laboratories, training epidemiologists, and deploying CHWs for malnutrition screening and disease detection. 4.6 Limitations and future works The study has limitations that warrant consideration. The temporal scope, limited to 2019–2021, restricts insights into long-term trends or the sustained impact of health security interventions. Additionally, the analysis focuses narrowly on predefined GHSI indicators, overlooking contextual factors such as political instability, cultural practices, or broader Sustainable Development Goal (SDG) measures (e.g., poverty alleviation, clean water access, education equity) that may indirectly shape health security outcomes in Western Asia. Methodologically, the use of K-means clustering with fixed income-based categories risks oversimplifying complex regional disparities. Future research could address these gaps by extending the temporal framework to capture post-pandemic recovery trajectories, integrating mixed-method approaches to incorporate qualitative insights (e.g., stakeholder interviews), and expanding indicators to include region-specific variables like migration patterns, conflict intensity, and SDG progress metrics. Exploring interdependencies between SDG achievements (e.g., SDG 3 on health, SDG 6 on water sanitation) and health security outcomes could reveal synergistic pathways for policy interventions. Additionally, alternative clustering techniques (e.g., hierarchical models) and cross-regional comparisons could enhance methodological robustness and generalizability. Finally, causal analyses would clarify the mechanisms linking financial investments, SDG-aligned programs, and health security outcomes. These advancements would strengthen evidence-based policymaking and foster more equitable, context-sensitive health security strategies in Western Asia and beyond. 
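As a concrete companion to the clustering step discussed above, the following minimal sketch shows how composite scores can be partitioned into five performance groups with K-means in scikit-learn, mirroring the Cluster 1 (best) to Cluster 5 (worst) convention used in this study. The seventeen scores are invented placeholders, not the study's Ci values, and a one-dimensional K-means on the composite score is only a simplified stand-in for the full CoCoSo-plus-clustering pipeline.

```python
import numpy as np
from sklearn.cluster import KMeans

# Invented composite scores (Ci) for 17 hypothetical countries -- NOT the study's data.
ci_scores = np.array([16.3, 14.9, 12.1, 11.8, 9.7, 8.4, 7.9, 6.2, 5.5, 4.1,
                      3.8, 3.2, 2.9, 2.4, 1.7, 1.1, 0.8]).reshape(-1, 1)

kmeans = KMeans(n_clusters=5, n_init=10, random_state=0).fit(ci_scores)

# Relabel so that Cluster 1 holds the highest-scoring group, matching the paper's convention.
order = np.argsort(-kmeans.cluster_centers_.ravel())
rank_of = {int(label): rank + 1 for rank, label in enumerate(order)}
clusters = [rank_of[int(label)] for label in kmeans.labels_]

for score, cluster in zip(ci_scores.ravel(), clusters):
    print(f"Ci = {score:5.1f} -> Cluster {cluster}")
```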
This study provides a comprehensive assessment of health security performance across Western Asia (WA) from 2019 to 2021, revealing both progress and persistent vulnerabilities within the region.
A significant finding from this study is the critical role of foundational health infrastructure, particularly in areas such as health system capacity (HS), detection and reporting (DR), rapid response (RR), and environmental risk management (RE). The consistently high weighting of the HS and DR indicators underscores their importance in achieving resilient health systems capable of mitigating public health threats. However, the increased emphasis on RR and RE in 2021 reflected an adaptive shift in health security priorities following the COVID-19 pandemic. This shift suggests that while stable health systems and robust detection mechanisms are essential, there is also a growing need for effective rapid response measures and environmental risk management to address immediate and evolving crises. The findings highlight significant disparities in health security shaped by variations in income levels, political stability, and resource availability. High-income countries (HICs), such as Qatar and Georgia, demonstrated the benefits of strategic investments and international partnerships, which enabled them to improve their health security rankings despite regional challenges. Conversely, conflict-affected, low-resource nations, such as Iraq, Yemen, and Syria, faced stagnation or even regression, illustrating the difficulties of sustaining health security gains without consistent resources and support. The cluster analysis illustrates that while some countries in WA managed incremental improvements in health security, others experienced setbacks, widening the gap between high- and low-performing nations. The decline of Iraq, Yemen, and Syria to lower clusters reveals the limitations of targeted short-term investments when not supported by a holistic and integrated approach. In these countries, challenges in resource allocation have hindered the capacity to develop comprehensive health systems. The low scores in prevention (PR) and detection/reporting (DR) highlight the urgent need for a foundational infrastructure that facilitates early detection and effective disease management. This trend aligns with global findings in similar low-resource settings, where limited preventive measures and weak surveillance systems increase the vulnerability to health crises. Our analysis also demonstrated a significant positive association between financial investments in health security and public healthcare spending and overall health security performance. The strong correlations suggest that increased funding in these areas could substantially enhance a country’s health security capabilities, thereby improving its resilience to public health threats. As global health risks continue to evolve, these findings advocate targeted financial investments as part of comprehensive health-security strategies. This study’s implications point to the necessity of a unified regional health security framework in WA that fosters equitable resource sharing, coordinated health strategies, and robust international partnerships. Through the consolidation of resources and expertise, Western Asian countries can address disparities in health resilience more effectively, particularly in nations facing significant economic or political challenges. Regional cooperation has the potential to standardize health protocols, enhance healthcare workforce training, and align policies with international standards, thereby strengthening health security across the region. 
Furthermore, the findings of this study underscore the critical need for coordinated efforts to bridge gaps in health security and to position the region to foster a robust and sustainable public health landscape.
Glossary
CoCoSo (the Combined Compromise Solution): An MCDM ranking model.
D-CRITIC (Distance Correlation-based Criteria Importance through Inter-criteria Correlation): A method for determining the relative importance of criteria.
Global Health Security Index (GHSI): A comprehensive assessment tool that measures health security capabilities across countries, focusing on various indicators that contribute to national and global health security.
Multi-Criteria Decision Making (MCDM): A set of methods and techniques used to evaluate and prioritize multiple conflicting criteria in decision-making processes.
WA (Western Asia): A geographical region in Asia.
Ethics and consent
No ethical approval or consent was needed.
Author contributions
Adel A. Nasser: Conceptualization, methodology, formal analysis, software, writing—original draft preparation; Abed Saif Ahmed Alghawli: Conceptualization, methodology, formal analysis, funding acquisition, and writing—original draft preparation; S. Saleh: Data curation, validation, investigation, resources, writing—review and editing, supervision; Amani A. K. Essayed: Visualization, software, and writing—review and editing.
During the preparation of this work, the author(s) used paperpal, quillbot, aistudio, and ChatGPT for language refinement and structure. After using these tools, the author(s) reviewed and edited the content as needed and take(s) full responsibility for the content of the publication.
Summary of emergency treatment experience of acute severe urethral trauma in pre-pubertal girls
b9a123dd-2a7b-4dc4-822b-d314d75ef7cd
11907945
Musculoskeletal System[mh]
Urethral trauma in female patients is rare and only sporadically reported in the literature. Most of these patients have blunt injuries , the most common cause of which is external trauma resulting in pelvic fractures. An estimated 4.6–6.0% of pelvic fractures in female patients are associated with urethral injury . Children’s bones are not yet fully developed; they are characterized by a high cartilage ratio and higher elasticity than adults’ bones, allowing them to absorb more energy in the event of injury . Pelvic fractures with pelvic ring rupture, especially Malgaigne fractures and straddle fractures, are associated with a higher incidence of urethral injury . These unstable fractures are more common in children than in adults. Therefore, the incidence rate of pelvic fracture and urethral injury is also higher in children than in female adults . The clinical symptoms of female urethral trauma are relatively complex. In addition to urethral damage that affects urination, reports in the literature indicate that 75–87% of cases are complicated by vaginal tears and that 33% of cases are complicated by rectal injuries . Therefore, urethral injuries in female patients exhibit different clinical characteristics due to their combination with other types of injuries. Because of the low incidence rate and complex clinical manifestations of these injuries, the diagnosis can be easily missed if clinicians are not adequately experienced and alert. It is reported in the literature that the diagnostic error rate of adult female patients in the emergency department is 40% ; therefore, further clinical experience is needed to strengthen our understanding of such injuries and improve the diagnostic rate. The European Association of Urology recommends early repair (within 7 days) of urethral injuries in female patients rather than delayed repair or urethral realignment . Although the treatment of urethral injuries in girls utilizes some of the same treatment strategies as in adults with successful outcomes described in numerous reports [ – ], there is currently no consensus on the optimal treatment method for urethral injuries in girls. Controversy regarding the optimal timing and method of surgical management has been ongoing for decades. Some doctors advocate for immediate repair of pelvic fractures causing urethral and vaginal injuries because the natural evolution of these injuries often leads to complete urethral occlusion, high-level urethral vaginal fistula, and varying degrees of vaginal stenosis. Some doctors suggest that the end of the urethra be treated with urethral realignment to avoid tissue dissection or suturing in the area of trauma . In addition, some urologists suggest that early bladder fistula drainage and delayed urethral repair may be the best treatment options because they can facilitate hematoma absorption and reduce tissue inflammation . The literature in China and abroad contains only sporadic reports on the emergency management of urethral injuries in girls, and large-scale studies of emergency diagnosis and initial treatment in such patients are lacking. Long-term urinary dysfunction in these patients has not been evaluated using standardized questionnaires. In the present study, we retrospectively analyzed girls with urethral injury admitted during the past 20 years with a focus on their clinical and follow-up characteristics in an effort to summarize the principles of emergency management for such patients. 
Clinical data
From April 2003 to April 2023, a total of six girls underwent emergency surgery for severe urethral injury in our hospital; the last of these patients was treated in 2014. All patients were under the age of 10 at the time of injury. We retrospectively reviewed the clinical data of these patients.
Preoperative preparation
All patients underwent routine clinical trauma evaluations, including a medical history, physical examination, routine blood examination, blood biochemistry (including liver and kidney function), chest X-ray or chest computed tomography, and abdominal ultrasound or computed tomography. All patients were evaluated for vital signs in the emergency department, and shock, if present, was corrected. Diagnostic catheterization was also attempted in the emergency department; if catheterization was impossible or difficult, urethral injury was considered. According to the patient’s overall injury situation, specialists from relevant disciplines were consulted for evaluation, and a suitable treatment plan was selected as soon as possible after this consultation. Pelvic fixation was typically performed prior to urethral reconstruction to ensure optimal surgical positioning.
Urologic surgical treatment
The following three urologic procedures commonly used in the emergency setting were performed in this study as necessary.
Cystoscopic examination
The patient underwent general anesthesia and was placed in the supine lithotomy position. In patients with pelvic fractures, bilateral thigh abduction may be limited even after pelvic fixation surgery; within the safe range of abduction, the perineum was exposed as fully as possible. After routine disinfection and draping, the cystoscope (KARL STORZ SE & Co. KG; rigid, 8 Fr or 10 Fr, 0°) was advanced into the urethra or vagina through their external openings to examine their condition. If the patient had already undergone a cystostomy, the bladder and urethra were examined endoscopically through the ostomy port.
Suprapubic bladder fistula surgery (cystostomy)
The patient underwent general anesthesia and was placed in the supine position. After routine disinfection and draping, a 1.5- to 2.0-cm horizontal incision was made above the pubic symphysis. The skin and subcutaneous tissues were incised, and the linea alba was vertically cut. The rectus abdominis muscle was retracted to both sides and bluntly separated until the extraperitoneal space could be accessed. The anterior wall of the bladder was exposed, and urine was aspirated as necessary. A double layer of purse-string sutures was placed on the anterior wall of the bladder, with an inner diameter of approximately 1 cm. After suturing, the anterior wall of the bladder was opened, a balloon catheter was inserted, water was injected into the balloon, and the purse-string sutures were tightened. Hemostasis was ensured, and the wound was closed with interrupted sutures in layers.
Urethral anastomosis and vaginal repair with a combined transabdominal and perineal approach
The patient underwent general anesthesia and was placed in the supine lithotomy position. After routine disinfection and draping, a 1-cm horizontal incision was made above the pubic symphysis, and the skin and subcutaneous tissue were incised. The linea alba was vertically cut, and the rectus abdominis muscle was retracted to both sides so that the anterior wall of the bladder could be lifted out and opened.
If the site of urethral rupture was located at the distal end of the urethra, the urethral and vaginal openings were often retracted into the pelvic cavity after injury. The surgeon could use a finger to push the bladder neck and urethral opening out toward the perineum, at which point the urethral and/or vaginal ends could be seen in the perineum. The urethra could then be pulled to the vicinity of the original urethral opening below the clitoris and sutured in an interrupted pattern to form a new urethral opening. If a tear was present in the anterior and posterior walls of the vagina, an attempt was made to suture and repair it. At the same time, the anterior and posterior walls of the vagina were pulled out and fixed to the surrounding skin near the original vaginal opening to form a new vaginal opening. If the urethral rupture was located at the proximal end of the urethra, the bladder neck, proximal urethra, and distal urethra were freed, and the distal and proximal ends of the ruptured urethra were anastomosed. During surgery, the posterior wall of the vagina and the rectum were closely examined. If the anterior wall of the rectum was damaged, it was repaired together with a general surgeon, and a colostomy was performed if necessary.
Postoperative treatment
After surgery, the patient’s vital signs were monitored, and antibiotics were used to prevent infection as necessary. Patients who had undergone bladder fistula surgery alone underwent wound dressing changes on postoperative days 3 and 7. The bladder fistula tube was left in place until the urethral repair surgery was completed. The other patients had a drainage strip placed under the pubic symphysis for 3 days of drainage. Oil gauze was used to control vaginal bleeding and was removed within 3 to 5 days. Balloon catheters were left in the urethra for 3 weeks, with an additional indwelling balloon catheter placed above the pubic symphysis as a bladder fistula. Generally, after the urethral catheter was removed, antegrade cystourethrography was performed to confirm urinary tract patency, followed by removal of the bladder fistula tube. The perineal wound management protocol included an initial pressure dressing applied for 3–5 days to achieve hemostasis, and standard aseptic technique using iodine solution was followed for infection prevention. Two adjunctive therapies were employed to enhance wound healing. Infrared therapy (Zhoulin Spectrometer, Shenzhen) was delivered at a frequency of 50 Hz and a power output of 110 W, with a spectral range of 2–25 μm, a treatment distance of 20–30 cm from the skin surface, and skin temperature maintained at 38–46 °C; sessions lasted 15–20 min, once daily, for a course of 10 sessions. Semiconductor laser biostimulation (YongShunKangTai Technology, Beijing) used a 46-beam diode array at a wavelength of 650 nm, a power output of 25 W, a frequency range of 2.5 Hz–20 kHz, a power density of 75 mW/cm², a treatment distance of 1 cm, and a spot diameter of 10 cm; sessions lasted 5 min, once daily, for a course of 10 sessions. All urethral and vaginal repairs were performed using absorbable sutures, obviating the need for subsequent suture removal. Regular follow-up, including urological ultrasound, retrograde cystourethrography, and urine flow rate examination, was performed three months, six months, and one year after surgery. If necessary, vaginal ultrasound was also performed.
Follow-up
Medical records
Follow-up information collected from the medical records included preoperative and postoperative clinical manifestations and imaging examination reports. All postoperative surgical complications were recorded, including hematoma formation, postoperative infection, delayed wound healing, difficulty urinating, urinary incontinence, vaginal fluid accumulation, whether repeat surgery or multiple surgeries were performed, and the specific surgical approach used. If the chief complaint was difficulty urinating and the maximum urine flow rate was < 10 mL/s, the patient was considered to have difficulty urinating. Daily use of more than one urine pad indicated urinary incontinence, and daily use of one or no pads indicated good urinary control.
Telephone or mail
Follow-up of patient-reported outcomes was conducted by contacting the patients by telephone or mailing them a survey questionnaire, including the Urogenital Distress Inventory short form (UDI-6) and the Incontinence Impact Questionnaire-7 (IIQ-7). According to the scores of these scales, urinary dysfunction was classified as normal (0–3), mild (4–7), moderate (8–11), and severe (≥ 12). Each patient’s menstrual condition was also monitored during follow-up.
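The severity banding applied to these questionnaire scores can be written as a small helper function. This is an illustrative sketch only, assuming the banding is applied to each questionnaire's total score, and the function name is ours rather than part of any standard scoring toolkit.

```python
# Illustrative helper for the severity banding described above (UDI-6 / IIQ-7).
# Assumes the banding is applied to a single questionnaire's total score.
def classify_urinary_dysfunction(score: int) -> str:
    if score <= 3:
        return "normal"      # 0-3
    if score <= 7:
        return "mild"        # 4-7
    if score <= 11:
        return "moderate"    # 8-11
    return "severe"          # >= 12

# Example: a UDI-6 total of 6 and an IIQ-7 total of 9 would be banded as:
print(classify_urinary_dysfunction(6))   # mild
print(classify_urinary_dysfunction(9))   # moderate
```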
The rectus abdominis muscle was retracted to both sides and bluntly separated until the extraperitoneal space could be accessed. The anterior wall of the bladder was exposed, and urine was aspirated from the bladder as necessary. A double layer of purse-string sutures was placed on the anterior wall of the bladder, with an inner diameter of approximately 1 cm. After suturing, the anterior wall of the bladder was opened, a balloon catheter was inserted, water was injected into the balloon, and the purse-string sutures were tightened. Hemostasis was ensured, and the wound was closed with interrupted sutures in layers. For urethral and vaginal repair, the patient underwent general anesthesia and was placed supine in the lithotomy position. After routine disinfection and draping, a 1-cm horizontal incision was made above the pubic symphysis, and the skin and subcutaneous tissue were incised. The linea alba was vertically cut, and the rectus abdominis muscle was retracted to both sides so that the anterior wall of the bladder could be lifted out and opened. The urethral and vaginal repair then proceeded as described above, according to whether the rupture involved the distal or proximal urethra.
General information: This study involved six patients with complete medical records who were successfully followed up by telephone or letter.
Their mean age at injury was 5.18 ± 3.13 years (range, 1.8–9.8 years), their mean follow-up time was 96.83 ± 61.32 months (range, 101–239 months), the mean surgical duration was 2.10 ± 1.08 h (range, 0.5–3.5 h), the mean blood loss was 38.67 ± 28.75 mL (range, 2–80 mL), the mean hospital stay was 38.17 ± 32.05 days (range, 3–95 days), and the mean time from injury to surgery was 5.71 ± 5.15 days (range, 5 h–14 days). Clinical manifestations: One patient's lower abdomen and pelvis were injured by a falling heavy object. The other five patients were injured in car accidents, and all were pedestrians at the time. One patient showed perineal bleeding and urinary incontinence after injury, and the other five patients showed perineal bleeding and inability to urinate. Treatment at other hospitals after injury: Because our hospital is a tertiary medical center, all six patients were transferred from other hospitals. When the patients arrived at our emergency department, they had all undergone simple treatments before transfer: one patient had undergone external genital tear repair and suprapubic cystostomy, one had received a blood transfusion to correct shock followed by creation of a bladder fistula (cystostomy) and external fixation of a pelvic fracture, one had received a blood transfusion to stabilize the hemoglobin concentration, one had been treated with wound debridement and bandaging and had undergone pelvic X-ray examination confirming a pelvic fracture, one had undergone cystostomy during which a urethral–vaginal fistula was discovered, and one had undergone indwelling catheterization for management of perineal bleeding and urinary incontinence after injury. Comorbidities: All patients had concurrent pelvic fractures. All patients also had concurrent vaginal injuries (anterior and posterior vaginal wall tears in four patients and anterior-only vaginal wall tears in two patients). Only one patient had a concurrent rectal injury, in whom a rectovaginal fistula was found during surgery and repaired at the same time. Two patients' injuries were complicated by closed abdominal injury, multiple fractures, extensive skin avulsion injury, and hemorrhagic shock, resulting in longer hospital stays of 48 and 95 days, respectively. Postoperative complications: No patients developed serious complications such as hematoma, severe infection, or pulmonary embolism. Early postoperative complications (within 3 weeks after surgery): One patient developed bleeding from the wound, which improved after application of a pressure bandage. One patient developed a wound infection, which improved after dressing changes, topical medication, and local physical therapy. Late postoperative complications (more than 3 weeks after surgery): Three patients (Cases No. 3, 5, and 6) achieved good postoperative urinary control with no symptoms of urinary incontinence. From 3 to 6 months after urethral and vaginal repair surgery, urinary bladder urethrography showed an unobstructed urinary tract and no urethral–vaginal fistula. Urinary tract ultrasound indicated normal upper urinary tract morphology. The maximum average urine flow rate was > 10 mL/s. All patients' menstrual condition was normal; two of them already had children. One patient developed postoperative urinary incontinence. She underwent emergency urethral anastomosis and vaginal repair, and cystoscopy was performed 1 year postoperatively because of urinary incontinence and dripping. During the procedure, a 13 F cystoscope could be inserted from the urethral opening.
The bladder mucosa was smooth, the bladder neck could contract and close, and the urethra was approximately 3 cm long. No urethral–vaginal fistula was found, and no fistula was observed on vaginal examination. After 3 years, because of continued urinary incontinence, a second cystoscopy was performed, followed by bilateral ureteral reimplantation and bladder neck tightening surgery. The patient recovered well. At the time of this writing, she was 13.8 years old, with smooth urination and no urinary incontinence. She had experienced menarche and had regular menstruation. Her scores were 3 points on the UDI-6 short form and 1 point on the IIQ-7 short form (see Table , Case No. 1). One patient experienced difficulty urinating 2 months after emergency urethral anastomosis and vaginal repair. Cystourethrography revealed bladder outlet obstruction and a proximal urethral stricture. Cystoscopy and urethral dilation were then performed. During the operation, a cystoscope was inserted into the vagina and a vaginal stenosis was found; however, it was not treated because of the patient's young age. More than 4 years later, the patient underwent vaginal dilation twice at another hospital because of poor menstrual drainage after menarche. At the time of this writing, she was 19.7 years of age and had normal urination, no urinary incontinence, and smooth menstruation. Her scores were 3 points on the UDI-6 short form and 2 points on the IIQ-7 short form (see Table , Case No. 2). UDI-6 and IIQ-7 short form scores: The six patients' UDI-6 and IIQ-7 scores are shown in Table . According to these scores, urinary dysfunction was classified as normal (0–3), mild (4–7), moderate (8–11), and severe (≥ 12) (see Table ).
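For readers who wish to apply the same severity banding to their own questionnaire data, a minimal helper is sketched below. The function name is ours and is not taken from the study's materials; only the score bands come from the text above.

```python
def classify_urinary_dysfunction(score: int) -> str:
    """Grade a UDI-6 or IIQ-7 short-form total using the bands reported in this study."""
    if score <= 3:
        return "normal"
    if score <= 7:
        return "mild"
    if score <= 11:
        return "moderate"
    return "severe"  # >= 12

# Example: Case No. 1's UDI-6 score of 3 falls in the "normal" band.
print(classify_urinary_dysfunction(3))  # "normal"
```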
Low incidence rate of urethral trauma in girls: As the largest tertiary medical center for children in North China, our hospital has treated only six female patients with acute severe urethral injuries during the past 20 years. This low incidence rate in girls aligns with other reports in the literature. Since 2014, our hospital has not treated any new cases of this type, suggesting that increased awareness of traffic safety and enhanced child safety measures may be reducing the number of such injuries. In addition, most patients involved in vehicle accidents are pedestrians hit by motor vehicles rather than passengers. With socioeconomic development and urbanization, more children are traveling inside cars rather than outside, which may also contribute to the decreased incidence of these injuries. Causes of injury and comorbidities: The injuries in five (83%) of the six patients were caused by car accidents, and all patients had both pelvic fractures and vaginal injuries. One patient also had a rectal injury. These ratios are close to those reported in the literature. Urethral injuries in girls are often caused by blunt trauma and are accompanied by pelvic fractures and injuries to other organs, and the mortality rate of pelvic fractures is reportedly as high as 21.6%. Therefore, emergency rescue and resuscitation are very important for all patients. In the present study, we prioritized correction of hemorrhagic shock and stabilization of the patient's vital signs as necessary. Atypical clinical manifestations and risk of misdiagnosis: Five of the six patients presented with a large amount of fresh blood flowing from the perineum and the inability to urinate, which is consistent with the common clinical picture after acute urethral injury. Notably, however, some patients with such injuries may have a more subtle presentation, and emergency doctors may miss the diagnosis. In one special case in our study, the patient developed perineal bleeding and urinary incontinence after injury and was managed with an indwelling catheter at an external hospital, which drained clear urine. After transfer to our hospital, because the indwelling catheter drained freely and the urine was clear, the emergency doctors focused on managing the vital signs and pelvic fractures and neglected a thorough examination of the urethra and vagina. Two weeks after injury, the patient still experienced urinary incontinence after the catheter was removed. Following consultation with a urologist, urethral and vaginal injuries were suspected. Cystoscopy revealed a urethral–vaginal fistula, and a suprapubic cystostomy was performed. One year later, the patient returned to the hospital for second-stage repair surgery. According to literature reports, traumatic pelvic fractures in female patients may lead to simple urethral contusions as well as partial or complete transverse or longitudinal urethral injuries. Because urethral injuries are extremely rare in girls with traumatic pelvic fractures and are often accompanied by other visceral injuries, doctors may not be sufficiently vigilant or may be preoccupied with managing other injuries. This type of injury, especially a longitudinal urethral injury, can easily be overlooked initially. Most patients with urethral injuries have tears in which either the normal urethral opening is not visible at the perineum or a catheter cannot be inserted into the bladder through the urethra.
This is one of the indicators used by emergency doctors to confirm urethral injuries. Longitudinal urethral tears may be less severe injuries because patients can often urinate on their own and the catheter can pass smoothly through the urethra into the bladder, making these injuries easy to overlook. This case reminds us that any young female patient with a combination of vaginal bleeding, pelvic fractures, or sacral spinal injuries should be carefully evaluated for urethral or vaginal injuries. Even if the catheter can be smoothly inserted into the bladder through the urethra, vigilance cannot be relaxed. Diagnosis without voiding studies or cystourethrography: use of cystoscopy (although not suitable for all patients): Urinary bladder urethrography is an essential examination for the diagnosis of urethral trauma in male patients. However, the diagnostic reliability of imaging for urethral injuries in female patients has always been controversial. According to literature reports, only half of such cases are correctly diagnosed through cystography. Most urethral injuries in female patients are diagnosed through cystoscopy under anesthesia. In the present study, none of the patients underwent urinary bladder urethrography. Four patients had severe perineal tears in which the normal urethral and vaginal openings were not visible, making it impossible to insert a urinary catheter; the diagnosis of urethral and vaginal injury was therefore very clear. Urological ultrasound in all six patients showed no bladder rupture and no obvious abnormalities in the upper urinary tract. Additionally, these patients had pelvic fractures and frequent movement was not advisable, so further imaging was not performed. The two other patients were diagnosed with urethrovaginal injury during cystoscopy under general anesthesia. Therefore, we believe that if physical examination and catheterization can confirm the presence of urethral injury in girls, the diagnosis can be made without relying on urinary bladder urethrography or cystoscopy. For patients suspected of having a urethral injury on physical examination, urethral endoscopy under general anesthesia can be performed to confirm the diagnosis, and cystoscopy is recommended if conditions permit. If the perineal tear is severe and the anatomy is distorted, it can be difficult to find the torn end of the external urethral orifice. In such cases, the bladder can be opened through a suprapubic incision first, and the urethra can be inspected with the cystoscope through the internal urethral orifice. The cystoscope can also be advanced into the vagina to check for vaginal and uterine injuries. Urethral rupture can occur at any level along the urethral course, and cystoscopy serves as a valuable diagnostic tool for precise lesion localization. In our series, Cases No. 4–6 underwent cystoscopic evaluation, revealing distal urethral ruptures that preserved the continence mechanism. Cases No. 1–3 did not undergo cystoscopy because the urethral ends were visually identified in the perineum, suggesting no immediate need for further assessment. However, Case No. 1 subsequently developed urinary incontinence. Postoperative cystoscopy revealed no urethral fistula, prompting bladder neck tightening surgery.
Given the bladder neck's critical role in urinary continence and our retrospective analysis of this case, we hypothesize that the initial injury may have involved the bladder neck, which was not adequately assessed during the primary repair. Based on these findings, we recommend routine cystoscopic evaluation for girls with urethral trauma when feasible, as it provides essential information regarding (1) precise lesion localization, (2) injury extent and severity, and (3) bladder neck integrity (Fig. ). Our experience in emergency surgical management: The management of urethral injuries in girls is relatively complex, and there is currently no clear guideline. Whether to repair the urethra immediately or to delay repair until life-threatening trauma has been treated and vital signs have been stabilized has long been controversial. Five of the six patients in our study underwent urethral and vaginal repair surgery within 7 days after injury, and one patient also underwent rectal repair surgery. Our experience highlights the following three points. First, if the patient's condition permits, emergency urethral and vaginal repair surgery can be performed safely. If the patient's hemodynamics are stable, emergency urethral and vaginal repair surgery can be considered; if necessary, reduction and fixation of pelvic fractures should be completed in advance. This can partially restore the distorted anatomy, making urethral surgery easier and helping to prevent serious sequelae in the lower urinary and reproductive tracts. If the patient's vital signs are unstable and the fracture is accompanied by severe clinical symptoms or related injuries, immediate suture repair is extremely difficult; in such cases, suprapubic cystostomy followed by secondary urethral reconstruction is preferable. Second, emergency urethrovaginal repair surgery is technically demanding and requires experienced pediatric urologists. In our study, the five emergency urethral and vaginal repair surgeries were performed by surgeons who had worked in pediatric urology for more than 15 years. They had treated multiple cases of old urethral trauma in girls and of urethral trauma in boys, were very familiar with the pelvic anatomy and surgical approach, and had rich experience in urethral anastomosis. Third, emergency repair surgery is beneficial for detecting concurrent injuries. In the present study, one patient was found to have a tear in the anterior wall of the rectum during surgery, and rectal repair was performed simultaneously. Postoperative complications: Postoperative complications following acute urethral injury repair are generally infrequent and mild. In our series, two cases of wound bleeding and infection resolved promptly with symptomatic management. To enhance wound healing, we employed infrared and semiconductor laser therapy. The two patients undergoing infrared therapy achieved satisfactory recovery without complications such as vaginal stenosis. While the existing literature questions the efficacy of far-infrared irradiation for perineal wound healing, we observed reduced wound exudation, potentially attributable to localized thermal effects. Current evidence supports low-level laser therapy (LLLT) as a safe and effective modality for enhancing wound healing. However, given our limited case series, these findings warrant further clinical investigation to establish treatment efficacy.
Among the five patients who underwent emergency repair surgery, two developed Clavien–Dindo grade IIIb complications after surgery, necessitating endoscopic or surgical treatment under general anesthesia. The success rate was therefore 60%, which is lower than that reported in the literature. We considered two reasons for this lower success rate. First, the success rates reported in the literature pertain primarily to adult female patients, who differ from pediatric patients. Second, because of the rarity of acute urethral injuries, our management experience was limited. UDI-6 and IIQ-7 scoring: The current gold standard for diagnosing urinary incontinence is urodynamic examination, an invasive test that is difficult for pediatric patients to tolerate and challenging for parents to accept. Patient-reported outcome scales mainly evaluate patients' subjective feelings, symptoms, and satisfaction with treatment, providing an effective supplement to objective clinical examinations. In the present study, we selected the two most commonly used questionnaires for patient follow-up: the IIQ-7 and the UDI-6. The IIQ-7 has good structural validity, contains few questions, and is convenient for patients to answer; it is currently one of the most widely used urinary incontinence quality-of-life scales in international research on pelvic floor dysfunction. The UDI-6 evaluates the degree of distress caused by lower urinary tract dysfunction and genital prolapse symptoms in female patients, with higher scores indicating more pronounced urinary incontinence symptoms; it is currently the most widely used questionnaire for this purpose. Using this scale, urinary dysfunction was classified as normal (0–3), mild (4–7), moderate (8–11), and severe (≥ 12). At the time of follow-up, the youngest patient was older than 13 years, and all patients were able to complete the questionnaires on their own. The results showed that four patients had normal urinary function and two had mild urinary dysfunction. Study limitations: This study involved only six patients, too few for an effective statistical analysis; further research with a larger sample size is needed. All patients were pre-pubertal at the time of injury, so we have no comparable experience with post-pubertal patients. We speculate that pre-pubertal girls may be more likely to experience this type of trauma: they have insufficient self-protection awareness and thin, weak perineal tissue that is prone to tearing when subjected to external forces. Urodynamic indicators are lacking because some of these patients are now adults who cannot return to the children's hospital for urodynamic examination, and some patients currently have normal urination and menstruation and have no motivation to undergo urodynamic tests. The most ideal research method for this purpose would be a prospective study; however, urethral injury in girls is relatively rare, making such research difficult to achieve.
The diagnosis of acute urethral trauma in girls warrants clinicians' attention. Girls who present with a combination of vaginal bleeding, pelvic fractures, or sacral spinal injuries should be carefully evaluated for urethral or vaginal injuries. Even if a catheter can be smoothly inserted into the bladder through the urethra, vigilance cannot be relaxed. If the presence of urethral injury can be determined through physical examination and catheterization, the diagnosis can be made without relying on urinary cystourethrography and cystoscopy. For patients suspected of having a urethral injury on physical examination, cystourethroscopy under general anesthesia can be performed to confirm the diagnosis. If the patient's vital signs are stable, emergency urethral and vaginal repair surgery can be performed. This procedure is difficult and requires experienced pediatric urologists; however, the success rate of a single operation is relatively high, and the procedure helps in detecting concomitant injuries.
Hysteroscopy among Patients Attending the Outpatient Department of Gynaecology in a Tertiary Care Centre: A Descriptive Cross-sectional Study
bcfe0da8-ebc6-4fa9-8eb3-3e9ab61ebf36
10231550
Gynaecology[mh]
Hysteroscopy is a diagnostic and operative procedure used worldwide for intrauterine pathology. Gradually over the years, it has become a tool of choice and is replacing dilatation and curettage in treating conditions such as abnormal uterine bleeding. Hysteroscopy enables visualisation of the endometrial cavity and, if possible, treatment in the same setting, avoiding the need for a more invasive procedure. It allows complete visualization of the endometrium, revealing normal and abnormal intrauterine pathology. Even missing intrauterine contraceptive devices (IUCDs), synechiae, and infertility can be managed with relative ease. The objective of the study was to find out the prevalence of hysteroscopy among gynaecological patients attending the outpatient department of Obstetrics and Gynaecology in a tertiary care centre. A descriptive cross-sectional study was done for a duration of five years, from 1 January 2016 to 1 January 2020, among gynaecological patients visiting the outpatient Department of Obstetrics and Gynaecology of Grande International Hospital (GIH), Dhapasi, Kathmandu, Nepal, after taking ethical approval from the Institutional Review Committee of GIH (Registration number: 029/2021). All patients visiting the outpatient Department of Gynaecology during the study period were included in the study. Patients with irregular menstrual cycles and with continuous per vaginal bleeding were excluded. Convenience sampling was used. The sample size was calculated using the formula n = Z²pq / e² = (1.96² × 0.5 × 0.5) / 0.06² ≈ 267, where n = minimum required sample size, Z = 1.96 at the 95% confidence interval (CI), p = prevalence, taken as 50% for maximum sample size, q = 1 − p, and e = margin of error, 6%. The calculated sample size was 267; however, a total of 319 patients were included. Each patient was given 400 mcg misoprostol per vaginam four hours prior to the procedure. The surgical instrument used was a rigid Storz hysteroscope with 0° or 30° lenses. A cutting loop, coagulating electrode, biopsy forceps, and scissors were commonly used. The distension medium was normal saline. All examinations were performed by a single gynaecologist with specialized training in gynaecological endoscopy and more than 15 years of experience. The standard sequence of visualization of the endometrial cavity was maintained: the ectocervix, endocervical canal, walls of the uterus, uterine fundus, ostia, and finally the endometrial characteristics. Hysteroscopic findings were documented and video-recorded. As needed, diagnostic and therapeutic management was performed. Obtained tissue was sent for histopathological examination. Demographic parameters of the patients such as age, parity, abortion, menopause, symptoms, and diagnosis, along with hysteroscopic and histopathological findings and procedures performed, were retrieved from the electronic database maintained in the hospital record. Care was taken to maintain patient confidentiality. Data were analysed using IBM SPSS Statistics version 22.0. Point estimates and 95% CIs were calculated. Among 319 gynaecological patients, hysteroscopy was done in 72 (22.57%; 95% CI: 17.98–27.16). Of these, 60 (83.33%) had endometrial tissue samples sent for histopathology. Among the remaining 12 cases, 5 (41.67%) patients had synechiae release, 3 (25%) had septum resection, 3 (25%) had IUCD removal, and 1 (8.33%) had foreign body removal. A total of 33 (45.83%) patients were in the age group of 31–40 years.
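As a quick numerical check of the sample-size formula above, the calculation can be reproduced in a few lines of Python; the function name and defaults below are ours and are not part of the published methods.

```python
from math import ceil

def minimum_sample_size(z: float = 1.96, p: float = 0.5, e: float = 0.06) -> int:
    """n = Z^2 * p * (1 - p) / e^2, rounded up to the next whole patient."""
    q = 1.0 - p
    return ceil(z ** 2 * p * q / e ** 2)

print(minimum_sample_size())  # 267, matching the study's calculated sample size
```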
Excessive vaginal bleeding during menstruation was the most common presenting complaint, seen in 28 (38.88%), followed by inability to conceive in 26 (36.11%). The commonest diagnosis at presentation was subfertility in 16 (22.22%), followed by endometrial polyp in 15 (20.83%), with both diagnoses present together in 5 (6.94%). A total of 27 (37.50%) patients underwent polypectomy and 20 (27.78%) underwent endometrial sampling. The hysteroscopic finding of endometrial polyp in 31 (43.05%) was confirmed by histopathological examination in only 28 (46.67%) cases. Hysteroscopy enables visualisation of the endometrial cavity and, if possible, treatment in the same setting. It is a minimally invasive procedure used worldwide. The American College of Obstetricians and Gynecologists and the American Association of Gynecologic Laparoscopists recommend the use of hysteroscopy for the diagnosis and treatment of intrauterine pathology. The recommendation of administering 400 mcg misoprostol per vaginam 4 hours prior to the procedure was followed in this study. The present study showed a higher prevalence of hysteroscopy than other studies. This could be because the present study was done in a centre that receives more infertility cases and is among the few in the country where hysteroscopy is performed. In the present study, the most common presenting complaint was excessive vaginal bleeding (38.88%), followed by inability to conceive (36.11%). Similarly, in another study, abnormal uterine bleeding was the presenting complaint in 32.5%. Endometrial polyp (43.05%) was the commonest hysteroscopic finding in the present study, similar to a study done in Libya (53.6%). However, this was contrary to a study done in India, where most patients presented with proliferative endometrium (34%). This could be because their study included only patients aged 40 years and above with abnormal uterine bleeding, whereas we included patients aged 20–65 years with various symptoms. There were five cases of uterine synechiae in which adhesiolysis was done, after which three patients were able to conceive. A study done in Linkou that carried out adhesiolysis in 85 women with Asherman's syndrome also showed excellent results. Uterine septum resection was done in three cases, with conception occurring within six months in two cases. The present study had 13 patients (43%) who conceived after surgery, mostly through in vitro fertilization. A study done in Belgium also showed a higher pregnancy rate of 63% after removal of an endometrial polyp prior to intrauterine insemination. Higher fertility rates thus usually occur in women seeking fertility treatment after hysteroscopic removal of an endometrial polyp, submucosal leiomyoma, uterine septum, or intrauterine adhesions. In the present study, three missing IUCDs and one foreign body were also successfully removed. No significant complications occurred except fluid overload in two cases (2.77%), which was managed with diuretics. The complication rate was also 2% in a study done in Libya; however, the complications were of different types (postoperative haemorrhage and perforation). This could be because of the larger number of cases in their study. Another reason could be the lack of consistency and experience of the surgeons performing hysteroscopy. Similar to the present study, a study done in Iran also reported fluid overload in only a few cases.
The confusion between proliferative and secretory endometrium could be due to a history of hormonal pill intake. In patients taking such pills, the endometrium shows mixed characteristics: within the same endometrial cavity, some areas may be proliferative and others secretory. Also, endometrial polyps and submucosal leiomyomas can present against a background of either secretory or proliferative endometrium, leading to false results. A limitation of the study is thus its small sample size. Larger sample populations with more named patterns on hysteroscopy are required, as one histological diagnosis may sometimes give more than one picture on hysteroscopic analysis. Also, since the data were collected retrospectively, some data might have been missed. The prevalence of hysteroscopy among gynaecological patients was higher than in studies done in similar settings. Hysteroscopy enables visualisation of the endometrial cavity and, if possible, treatment in the same setting, avoiding the need for a more invasive procedure.
A multi-modal fusion model with enhanced feature representation for chronic kidney disease progression prediction
ef071ae3-af1d-44bd-912c-c59789f5edad
11801269
Biochemistry[mh]
Over the past 30 years, chronic kidney disease (CKD) as a cause of death has become more common and now contributes 1.35% of the global burden of disability-adjusted life years lost. Against the background of an aging population, CKD prevalence is increasing and involves roughly 10% of the global population. Clinical treatment decisions for CKD are challenging because of the multiple pathologic types and the complexity of patient profiles. Many models have been developed to predict CKD progression, most using data recorded in electronic medical records (EMR). The Kidney Failure Risk Equation (KFRE) uses four clinical and biochemical variables to predict the risk of kidney failure. Tangri et al. fused demographic, clinical, and laboratory data to improve prediction accuracy. Several studies have leveraged machine learning algorithms (e.g. random forests, logistic regression, support vector machines) to capture more complex feature interactions and expand the range of variables that can be analyzed. In addition, some studies have used data modalities other than clinical data. Bienaimé et al. used urine biomarkers to assess CKD progression. Agarwal et al. examined protein biomarkers in blood and urine samples. Looker et al. integrated serum biomarkers and clinical variables to predict rapid progression of CKD in patients with type 2 diabetes. Kondo et al. analyzed spatial proteomics and found that protein expression, cellular phenotypic composition, and microenvironmental structure changed with the progression of diabetic nephropathy. However, in clinical practice, doctors need to integrate a variety of information, such as disease descriptions, examination reports, and pathology images, in order to make decisions. Existing models, which analyze and use each modality's data independently, ignore the complementary information between modalities, and the potential of multi-modal data for analyzing disease progression in patients with CKD has not been realized. Artificial intelligence (AI) algorithms have been shown to analyze big data effectively for diagnosing and treating diseases. In particular, AI-based multi-modal algorithms can discover the internal patterns linking different modalities within a short training time, offering solutions to population health problems, e.g. Alzheimer's disease and cardiovascular disease, that were previously thought intractable. Vanguri et al. collected patient-level features from histology, radiology, genomics, and biomarkers using both manual and machine learning methods and fed the extracted features together into a modality fusion model. Boehm et al. improved risk stratification of high-grade serous ovarian cancer using radiological features extracted by traditional image methods, annotation of histological images, and binary encoding of clinical data. Chen et al. integrated histology–genomics analysis with patch-level image features and raw molecular features across pan-cancer cohorts. However, these methods address only how to fuse the data from each modality; the representation of the features is fixed before they are fed into the model's fusion module. Suitable data fusion methods can improve the expressive ability of the model, but if the input features are noisy, the model still cannot make effective predictions. Zhou et al. considered fusion at the feature level, but their model must be built on a large dataset.
There is an objective difficulty: as more modalities are included, the number of samples with matched data across all modalities declines. It is therefore essential to explore methods for enhancing feature representativeness while integrating additional modalities, particularly when working with small datasets. In summary, in the field of medical multi-modal algorithms, high heterogeneity among modalities and instability in the quantity and quality of data are the key factors limiting model performance. There is a lack of an algorithm that can quickly and efficiently fuse multi-modal data directly on small datasets without complex data preprocessing, and in CKD applications only a few data types are currently used. In this paper, we therefore introduce FLEX, a deep learning-based multi-modal data fusion model designed to predict CKD progression. FLEX uniquely integrates pathology images, clinical data, proteomics data, and metabolomics data through advanced fusion learning to create a comprehensive representation of disease progression. The X in FLEX indicates that the model can be applied flexibly to one, two, three, or four data modalities. FLEX innovates in the encoding of features to improve the efficiency of feature fusion, so that the model optimizes both its structure and the feature encoding during training, thereby enhancing feature representativeness. To the best of our knowledge, FLEX is the first end-to-end multi-modal analytical model in the medical field that simultaneously learns from multiple data sources with enhanced feature representativeness, as illustrated in . We validate the effectiveness of the model through multiple comparison and ablation experiments. Additionally, we assess the model's generalizability on an external dataset, confirming its robustness and reliability. We also visualize the modality and feature importance distributions, which are crucial for interpretable analysis of multi-modal models. In summary, our study highlights the significant advantages of using AI to integrate multi-modal data, showcasing FLEX's ability to accurately predict CKD progression and advance personalized medicine approaches. Overview: To address the challenge of integrating multi-source, multi-scale data, we developed FLEX, a pioneering multi-modal interpretable deep learning model for predicting and interpreting CKD progression. FLEX uniquely integrates clinical data (C), proteomic data (P), metabolomic data (M), and digital pathology images (I) to forecast CKD progression over 3 years, as illustrated in . FLEX introduces a feature encoding trainer (FET) developed specifically for tabular data. The FET allows FLEX to train and update the encoding of features while training the model parameters end-to-end, thus enhancing the extraction of high-dimensional abstract features. We trained our model on the internal dataset using five-fold cross-validation and reported six performance metrics. To validate the model, we performed ablation analyses with different feature numbers, modality combinations, modality fusion methods, and base models. In particular, we collected an external dataset containing only tabular data to validate model robustness. After completing model construction, we visualized and interpreted the importance of features affecting progression across modalities. We also compared the differences in feature importance across populations using feature aggregation and gradient-based methods.
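The implementation is not reproduced here, but the core idea of the FET, a learnable, trainable encoding for each tabular feature feeding a late-fusion head alongside an image encoder, can be sketched as follows. This is a minimal illustration under our own assumptions (layer sizes, module names, and the plain concatenation head are ours), not the authors' actual code.

```python
import torch
import torch.nn as nn

class TabularEncoder(nn.Module):
    """Illustrative FET-style encoder: each tabular feature gets a learnable
    embedding that is scaled by the (standardized) feature value and updated
    jointly with the rest of the network."""
    def __init__(self, n_features: int, dim: int = 32):
        super().__init__()
        self.embed = nn.Parameter(torch.randn(n_features, dim) * 0.02)
        self.proj = nn.Linear(n_features * dim, 128)

    def forward(self, x):                      # x: (batch, n_features)
        tokens = x.unsqueeze(-1) * self.embed  # (batch, n_features, dim)
        return torch.relu(self.proj(tokens.flatten(1)))

class SimpleFLEX(nn.Module):
    """Toy four-modality fusion head (concatenation) for binary progression."""
    def __init__(self, n_clin, n_prot, n_metab, img_dim=512):
        super().__init__()
        self.clin = TabularEncoder(n_clin)
        self.prot = TabularEncoder(n_prot)
        self.metab = TabularEncoder(n_metab)
        self.img = nn.Linear(img_dim, 128)     # stands in for a ResNet18 feature vector
        self.head = nn.Sequential(nn.Linear(128 * 4, 64), nn.ReLU(), nn.Linear(64, 1))

    def forward(self, clin, prot, metab, img_feat):
        z = torch.cat([self.clin(clin), self.prot(prot),
                       self.metab(metab), torch.relu(self.img(img_feat))], dim=1)
        return self.head(z)                    # logit for 3-year progression
```

In such a sketch, the per-feature embeddings in TabularEncoder are ordinary parameters, so they receive gradients and are refined together with the fusion head; that is the sense in which the feature encoding is trained end-to-end alongside the model.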
FLEX represents a significant advancement in the CKD field, offering both high clinical relevance and computational efficacy as one of the first end-to-end multi-modal models to integrate diverse data types for improved disease prediction and understanding. Dataset construction We collected 259 CKD patients from May 2013 through November 2021 at Nanfang Hospital as the internal dataset. CKD progression was treated as a binary endpoint, and patients were divided into progression and non-progression groups based on whether their follow-up records showed CKD progression within three years ('Progression definition' section in Supplementary Data). The progression group consisted of 106 patients, while the non-progression group consisted of 153 patients. Each patient had at least one electronic medical record, one urine sample and one kidney biopsy sample . For duplicate data, records belonging to the progression group, or those with an earlier sample acquisition time, were kept preferentially. The internal dataset comprises four modalities: clinical data, proteomics data, metabolomics data, and image data, which makes it the most informative dataset for studying CKD progression. The EMR provided clinical data, including demographic information, past medical history, medication history, laboratory test results, and pathologic diagnosis. Urine samples were used for proteomic sequencing and metabolic analysis to generate proteomics and metabolomics data, respectively. Kidney biopsy samples stained with Periodic Acid-silver Methenamine (PASM) were used to obtain digital pathology images at 20× or 40× magnification. The data processing pipeline is shown in and 'Dataset description' section in . The external dataset contained 81 patients at Nanfang Hospital from June 2013 through January 2022, with a similar distribution of CKD progression categories as the internal dataset. Three modalities of tabular data were included in the external dataset: clinical data, proteomics data, and metabolomics data, processed using the same methods as the internal dataset. The characteristics of the internal and external datasets are summarized in . There is no overlap between the internal and external datasets, nor between the training, test and validation sets of the internal dataset. Multi-modal fusion improves the performance of CKD progression prediction We first evaluated the performance of FLEX on the internal dataset using five-fold cross-validation. FLEX achieved a mean area under the receiver operating characteristic curve (AUC) of 0.887 when integrating data across the four modalities C, P, M, and I . To further explore the impact of different data modality combinations, we kept the model architecture and data splitting fixed while performing experiments with various modality combinations. The AUC distributions for these experiments are shown in (see for the full performance demonstration). The mean AUC of the four-modality fusion (I-C-P-M) exceeded that of the unimodal I, C, P, and M models by 0.204, 0.082, 0.092, and 0.056, respectively. We observed that the model's performance improves as more data modalities are integrated. Specifically, when comparing mean AUC against single-modality performance as the baseline, we find that the image model gained the most from superimposing other modal data (i.e. C, P, M, C-P, C-M, P-M, C-P-M).
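The evaluation protocol used throughout these comparisons, five-fold cross-validation with AUC as the headline metric, can be reproduced with standard tooling. The sketch below uses scikit-learn; the fit_fn and score_fn placeholders stand in for the FLEX training and inference code, which is not reproduced here, and X and y are assumed to be NumPy arrays.

```python
import numpy as np
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import StratifiedKFold

def cross_validated_auc(X, y, fit_fn, score_fn, n_splits=5, seed=0):
    """Mean AUC over stratified folds, mirroring the reported evaluation protocol.

    fit_fn(X_train, y_train) -> model and score_fn(model, X_test) -> P(progression)
    are placeholders for the actual training and inference routines."""
    folds = StratifiedKFold(n_splits=n_splits, shuffle=True, random_state=seed)
    aucs = []
    for train_idx, test_idx in folds.split(X, y):
        model = fit_fn(X[train_idx], y[train_idx])                       # fit on four folds
        aucs.append(roc_auc_score(y[test_idx], score_fn(model, X[test_idx])))  # score held-out fold
    return float(np.mean(aucs)), aucs
```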
We attribute this improvement to the substantial differences in input forms and feature extraction methods between image data and the other modalities, as well as the enhancement provided by tabular data in the model's ability to interpret images. In contrast, when we added one modality's data to any combination of the other modalities' data, we found that adding proteomics data provided the greatest performance improvement, possibly because proteomics features are more independent of the other modalities, which creates complementary features and leads to performance improvements. Overall, clinical data, image data, and proteomics/metabolomics data represent individual-, tissue-, and molecular-scale features, respectively. Despite the high heterogeneity in data structure and content across these modalities, FLEX can integrate multimodal data and investigate the underlying associations to enhance the accuracy of predicting CKD progression. To further explore the performance of FLEX in different subgroups, we statistically analyzed the results of the best trained models on the test set by gender (male, female), pathological diagnosis (IgA nephropathy [IgAN], not IgAN), age (grouped by median), and estimated glomerular filtration rate (eGFR) grading, respectively. The mean AUC values were as follows: male 0.897 versus female 0.865, younger 0.884 versus older 0.898, IgAN 0.843 versus non-IgAN 0.920, and low eGFR 0.866 versus median eGFR 0.860 versus high eGFR 0.897 . No significant differences were observed within any subgroup, highlighting the robustness of our model. We compared FLEX, FLEX using only clinical data (FLEX-C), and the clinically common KFRE model . FLEX and FLEX-C improved the mean AUC compared to KFRE by 0.160 and 0.082 . Among the age, sex, urinary albumin to creatinine ratio (UACR), and eGFR variables used in KFRE, eGFR was also used in FLEX-C; age and sex were removed due to weak feature expressiveness ('Ablation experiments' section in Supplementary Data), and UACR was replaced by the strongly correlated but more discriminatory urinary protein-to-creatinine ratio (UPCR) ('Tabular datasets' section in Supplementary Data). Thus, the clinical features used by FLEX-C are more representative and achieve higher performance. During the selection of the optimal model architecture, we compared the effectiveness of different modal fusion methods in capturing inter-modal relationships ('Ablation studies' section in Supplementary Data). We evaluated four methods (summation, multiplication, gating, and concatenation) and obtained mean AUCs of 0.869, 0.562, 0.840, and 0.887, respectively . The lower mean AUC for the multiplication method is attributable to zero entries in each modality's representation vector, which leave ineffective information in the final multimodal representation. We also analyzed the performance of different image base models ('Ablation studies' section in Supplementary Data). Using ResNet18, ResNet34, ResNet50 and VGG-16 as base models, FLEX obtained mean AUCs of 0.887, 0.866, 0.876 and 0.873, respectively, suggesting that ResNet18 extracts image features best among these backbones . During data preprocessing, we also found that missing data for one modality is common in clinical practice, so we developed the Meta module, which takes the three other modalities as inputs and generates a replacement vector for the missing modality ('Meta module' section in Supplementary Data).
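The four fusion operators compared in this ablation are simple element-wise or concatenation operations over the per-modality embeddings. The sketch below illustrates them under the assumption that every modality is encoded to the same dimensionality; the scalar gate used for the gating variant is our guess, not the exact FLEX design.

```python
import torch
import torch.nn as nn

def fuse(per_modality, method, gate=None):
    """Combines modality embeddings of equal shape (batch, d) with one of the four
    operators compared in the ablation. A schematic sketch, not the FLEX module."""
    stacked = torch.stack(per_modality, dim=0)          # (n_modalities, batch, d)
    if method == "sum":
        return stacked.sum(dim=0)
    if method == "product":
        # A zero entry in any single modality zeroes the fused value, which is the
        # explanation given above for the poor AUC of this variant.
        return stacked.prod(dim=0)
    if method == "gate":
        weights = torch.softmax(gate(stacked), dim=0)   # learned per-modality weights
        return (weights * stacked).sum(dim=0)
    if method == "concat":
        return torch.cat(per_modality, dim=1)           # (batch, n_modalities * d)
    raise ValueError(f"unknown fusion method: {method}")

# Example with four 64-dimensional modality embeddings and a simple scalar gate.
embeddings = [torch.randn(8, 64) for _ in range(4)]
fused = fuse(embeddings, "gate", gate=nn.Linear(64, 1))
```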
shows that using the Meta module improves the average AUC of each modality by 1% compared to leaving that modality missing. This indicates that FLEX tolerates missing data and can therefore be adapted more readily for clinical applications. Impact of FET module on feature representation To understand how the model enhances feature representation, we conducted ablation experiments focusing on the FET. We investigated the steps by which FET processes features and report the results in ('Ablation studies' section in Supplementary Data). FET is a feature extractor capable of accepting data in tabular form as input from any data source. In this study, we applied FET to clinical data at the individual level as well as proteomics data and metabolomics data at the molecular level. First, we investigated the impact of 'Hash mapping'. We found that processing data using 'Hash mapping' yields a significant performance improvement compared to directly inputting the raw data (two-tailed Mann–Whitney U test P-value = 2.62 × 10^-5). This finding validates our hypothesis that direct fusion of data from different levels is affected by noise due to distributional differences. Clinical data mostly follow binomial and normal distributions, whereas proteomics and metabolomics data mostly follow long-tailed distributions. 'Hash mapping' maps data of different modalities to the same scale and then aggregates the features, which partly preserves the original distribution and reduces the data complexity, thus improving model generalization. Next, we investigated the impact of 'Encoding training'. 'Encoding training' encodes the one-dimensional features generated by 'Hash mapping' into ten-dimensional vectors, and trains the feature encoding with gradient backpropagation and an optimizer. 'Encoding training' similarly produced a significant improvement over the 'Raw data' setting (two-tailed Mann–Whitney U test P-value = 1.41 × 10^-5). Specifically, 'Encoding training' increases the encoding length of the features to compensate for the decrease in effective information due to 'Hash mapping'. By training the encoding, in the same way that a model adjusts its parameters to extract effective data patterns, FET no longer fixes the feature encoding but implicitly learns the interrelationships among features in high-dimensional space through gradient computation. Each feature vector is additionally connected to a learnable fully connected layer to optimize feature representation more effectively. As shown in , this feature training approach not only enhances the model's ability to learn from tabular data, but also captures the correlation between the image and the tabular data, realizing end-to-end multi-modal fusion and thereby improving the performance of our model. Interpretability and validation of multimodal prediction results To further validate and interpret our model, we employed feature aggregation and gradient-based methods to illustrate how each modality's data affects the prediction ('Model interpretability' section in Supplementary Data). This section demonstrates the reasonableness of the features that the model focuses on by analyzing modal importance, image focus areas, and tabular feature importance. For the modal contribution evaluation, we used the gradient-based Integrated Gradients method . This method calculates the impact of the representative features in each of the four modalities on the final prediction results.
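The FET description above (hash each raw tabular value onto a common scale, then learn a ten-dimensional encoding per feature, each refined by its own learnable fully connected layer) can be sketched as a small PyTorch-style module. The discretization step, vocabulary size, and hashing constant below are assumptions made for illustration; the published FET may differ in these details. In this reading, both the embedding table and the per-feature layers receive gradients from the downstream fusion loss, which is what "training the encoding" refers to.

```python
import torch
import torch.nn as nn

class FETSketch(nn.Module):
    """One reading of the feature encoding trainer: hash each (feature, value) pair
    into a shared vocabulary, look up a trainable ten-dimensional code, and refine
    each feature's code with its own small linear layer."""

    def __init__(self, n_features: int, n_buckets: int = 1024, code_dim: int = 10):
        super().__init__()
        self.n_buckets = n_buckets
        self.codes = nn.Embedding(n_buckets, code_dim)                    # trained end to end
        self.per_feature = nn.ModuleList(
            [nn.Linear(code_dim, code_dim) for _ in range(n_features)]
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, n_features) raw tabular values on very different scales
        outputs = []
        for j, layer in enumerate(self.per_feature):
            # "Hash mapping": fold the discretized value and the feature index
            # into a fixed-size vocabulary shared by all tabular modalities.
            bucket = (torch.round(x[:, j] * 10).long() * 31 + j).remainder(self.n_buckets)
            # "Encoding training": the looked-up code and the per-feature layer
            # are both updated by backpropagation together with the rest of the model.
            outputs.append(layer(self.codes(bucket)))
        return torch.cat(outputs, dim=1)                                  # (batch, n_features * code_dim)
```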
The degree of influence is reflected in the numerical values of the calculated results. We analyzed each modality for each patient and examined the distribution across different populations. For the internal dataset, we show the modal importance distributions for the progression group, the non-progression group, and the whole internal dataset in , , and , respectively. Notably, while the order of modal importance was not consistent across populations, the image data were always ranked last, which suggests that the use of kidney biopsy, and hence the associated burden on the patient, could be reduced. Since the image modality had the lowest modal importance, we used SCDA to examine whether the image model learned effective morphological features. The SCDA method transforms the model's feature map into a score of each pixel's relevance to the outcome. We plotted heat maps from these scores, with red representing strong correlation and blue representing weak correlation. Areas of high interest were selected and submitted to clinicians for validation and interpretability analysis. We show the whole-slide images (WSI), heatmaps, and regions of interest (ROI) for the progression and non-progression groups in and , respectively. Interestingly, by comparing the regions of interest of the two groups, we found that in the progression group the model focused more on regions of tubular atrophy and interstitial fibrosis, while in the non-progression group it focused on normal tissue regions, avoiding severe lesions such as glomerulosclerosis. This implies that the image data also contributed effective and valuable information. We additionally analyzed the importance of each indicator in the clinical data, proteomic data, and metabolomic data separately. Indicator importance analyses were also performed using the Integrated Gradients approach, where each indicator was treated as a separate feature input. We analyzed the importance of each feature for individual patients, thus comparing differences between features in different populations. Importance scores of features for all tabular data are in , , and . By ranking the features in descending order of median importance score, we obtained the important features and validated them in the literature, thus enabling the interpretability analysis and result validation of the tabular data. , , and highlight the top ten features in the clinical data. We observed that eight of the top ten features were present in all three populations, although in a different order. Five of the eight variables were past-medical-history items, including cardiovascular and cerebrovascular diseases (CCVD), chronic obstructive pulmonary diseases (COPD), pulmonary heart disease (PHD), peripheral vascular disease (PVD) and pulmonary circulatory disorders (PCD). Among these, cardiovascular comorbidities have been shown to be an important risk factor for death in patients with CKD . COPD, a known source of systemic inflammation, is similarly associated with the development of cardiovascular disease . The involvement of cardiac, pulmonary, and vascular disease in the past medical history as key clinical features fits with the concept of managing comorbidities of cardiorenal and metabolic diseases in practice .
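Integrated Gradients, used here for both modal and indicator importance, integrates the model's gradients along a straight path from a baseline to the actual input. A plain implementation is sketched below; the all-zero baseline, the 50 integration steps, and the way the model is called on a tuple of modality tensors are our assumptions rather than the paper's exact configuration, and the names in the commented usage line are hypothetical.

```python
import torch

def integrated_gradients(model, inputs, steps=50):
    """Plain Integrated Gradients for a tuple of modality tensors (a sketch).

    Returns one attribution tensor per modality; summing absolute attributions within
    a modality gives a per-patient modal importance score of the kind compared above."""
    baselines = tuple(torch.zeros_like(x) for x in inputs)
    totals = [torch.zeros_like(x) for x in inputs]
    for k in range(1, steps + 1):
        alpha = k / steps
        scaled = tuple(
            (b + alpha * (x - b)).detach().requires_grad_(True)
            for x, b in zip(inputs, baselines)
        )
        output = model(*scaled).sum()              # scalar: summed risk over the batch
        grads = torch.autograd.grad(output, scaled)
        for total, g in zip(totals, grads):
            total += g / steps
    return [(x - b) * t for x, b, t in zip(inputs, baselines, totals)]

# Hypothetical usage:
# attributions = integrated_gradients(flex_model, (img, clin, prot, metab))
# modal_importance = [a.abs().sum().item() for a in attributions]
```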
Additionally, the three remaining variables, Beta-blocker medication history (BB) , antidiabetic drugs medication history (AD) , and prothrombin activity (PTA) , have all been studied in relation to their influence on the progression of CKD. In the complete list of ranked features, we also found that four features, past medical history of other neurological disorders (OND) , past medical history of primary hypertension (PH) , calcium-channel blocker medication history (CCB) , and Cystatin C (CysC) , showed significant variation in ranked position between the progression and non-progression groups. This suggests that these four features also have strong discriminative ability in predicting CKD progression. The complete feature-name cross-reference table is in . , , and show the top ten ranked proteomic features. There is an overlap of important features between the progression group and the whole internal dataset. For example, Q96S96, Q9H4A3, P08709, and Q16610 have been reported to be associated with IgAN , diabetic nephropathy , myocardial infarction in patients with CKD , and clear cell renal cell carcinoma , respectively. FLEX also identifies additional proteins associated with diabetic nephropathy (i.e. P12104 , O95980 , and P28300 ), or with other types of kidney disease (i.e. P02788 , P61626 , and Q02985 ). Interestingly, this corresponds to the CKD pathologic diagnostic types included in the construction of the internal dataset. We also found that two proteins associated with renal cell carcinoma, O95450 and P30048 , have high feature importance scores, which suggests that findings from other areas of kidney research could be transferred to CKD-related studies. , , and demonstrate important metabolomic features. Compared with the clinical and proteomic data, the metabolomic features showed the most distinctive gap between the progression and non-progression groups. Six of the top ten metabolites in the progression group (C15H10N2O3*, C9H5O4, C20H25NO4*, C20H35N3O3, C20H25NO4, C25H34O3) were within the bottom eleven in the non-progression group. Based on the literature, we believe that C20H25NO4* or C20H25NO4 corresponds to Cilomilast, which ameliorates tubulointerstitial fibrosis . Six of the top ten features in the non-progression group were likewise within the bottom ten in the progression group (C7H13NO2, C12H14ClN3O, C20H10O2, C10H6O5S, C19H26, C12H14N2O4). C7H13NO2 may be proline betaine, which reflects the osmoprotective role of the kidney . C18H23NO4 is the only metabolite that shows significant importance across both populations. While there is less research on kidney metabolomics than on proteomics and clinical data, our study indicates that metabolomic features are the most diverse. These findings could potentially lead to new insights and directions for kidney disease research. External validation To further assess the generalization ability of the model, we validated the performance of FLEX on an external dataset. While the internal and external datasets had similar proportions of progression versus non-progression patients, their characteristics differed substantially. The internal dataset had a relatively homogeneous distribution of eGFR, whereas in the external dataset 80.2% of patients had eGFR < 60 ml/min/1.73 m2. The most common pathological diagnostic category in the internal dataset was IgAN, whereas in the external dataset the diagnosis was an unknown category for 69.1% of patients.
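The feature rankings reported above are obtained by aggregating the per-patient attribution scores within each population and sorting by the median. A small pandas sketch of that aggregation is shown below; the DataFrame layout, the variable names, and whether absolute values are taken before aggregation are illustrative assumptions.

```python
import pandas as pd

def rank_by_median_importance(attributions: pd.DataFrame, groups: pd.Series) -> dict:
    """attributions: one row per patient and one column per feature, holding the
    per-patient Integrated Gradients scores; groups: 'progression' / 'non-progression'
    labels aligned on the same index. Returns the features of each population sorted
    by descending median importance, mirroring the ranking behind the top-ten tables."""
    ranked = {name: part.median().sort_values(ascending=False)
              for name, part in attributions.groupby(groups)}
    ranked["all"] = attributions.median().sort_values(ascending=False)
    return ranked

# Hypothetical usage: top ten clinical features in the progression group.
# rank_by_median_importance(clinical_attr, labels)["progression"].head(10)
```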
We applied each of the five models trained during internal cross-validation to the external dataset in turn. Both datasets demonstrate the best performance at the third fold, which may be due to a more favorable feature distribution in that fold . The mean AUCs for the internal and external datasets were 0.875 and 0.872, respectively, which were not significantly different. We also validated important features of the clinical data , proteomics data , and metabolomics data , respectively. The clinical features AD, DN, PVD and PCD performed consistently with the internal dataset. Among the remaining clinical characteristics, FLEX also focused on the biomarkers blood urea nitrogen and UPCR , which assess kidney function. We again found associations between blood disorders and CKD in the external dataset, such as whether loop diuretics were used to control blood pressure , and whether there was a history of anemia or PH . CCB was the characteristic that differed significantly between the progression and non-progression groups in the internal dataset; it ranked seventh in the external dataset, supporting our conclusion that it has strong discriminative ability. The proteomic variables Q16610, O95980, and P28300 have been discussed previously. P05451, A6NGN9, and O75629 are related to diabetic kidney disease , membranous nephropathy , and renal fibrosis , respectively. Several proteins, including Q9HCM3, Q86U17, Q8NHJ6, and O60449, have not been studied extensively yet and show considerable variation in their importance across groups. We believe these proteins could open new avenues for research. In metabolomics, compounds such as C7H13NO2, C18H23NO4, and C23H32O3 showed significance consistent with the internal dataset. However, the other features revealed the same trend observed in the internal dataset: population differences lead to significant variations in the distribution of feature importance.
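The external evaluation protocol, applying each internally trained fold model to the external cohort and averaging the AUCs, can be written as a short helper. As before, the model objects and the scoring helper are placeholders, not the FLEX code.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def external_validation_auc(fold_models, score_fn, X_external, y_external):
    """Applies each internally trained fold model to the external cohort in turn and
    averages the resulting AUCs. fold_models and score_fn(model, X) -> P(progression)
    stand in for the trained artifacts and inference routine."""
    aucs = [roc_auc_score(y_external, score_fn(m, X_external)) for m in fold_models]
    return float(np.mean(aucs)), aucs
```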
The fusion computation of multimodal data in the medical field is an important research topic. Medical data come from a variety of individuals, measurement means, and application environments. Sources of data include, but are not limited to, clinical data, mobile health monitoring data, and demographic data. The difficulties caused by structural differences between data from different modalities make it challenging to apply multi-modal algorithms in clinical practice. Traditional approaches divide the progression prediction problem into multiple components, which hinders the learning of inter-modal relationships and makes end-to-end optimization difficult. In this study, we present FLEX, a pioneering interpretable model designed for predicting CKD progression using multimodal data. As the first artificial intelligence model that integrates clinical, proteomic, metabolomic, and imaging data by enhancing feature representativeness, FLEX tackles the complex challenge of fusing data from diverse sources and scales. Its architecture demonstrates strong performance in predicting CKD progression, highlighting its robustness and innovation. FLEX not only advances the state of the art in CKD prediction but also sets a new benchmark for handling multimodal data in clinical applications. Our study shows that the prediction performance of multi-modal fusion exceeds that of unimodal prediction and of methods commonly used in clinical practice. Performance tends to increase with the number of modalities. In particular, the improvement from modal fusion is largest when the modalities differ substantially in their feature extraction methods or in the information they carry.
In the comparison experiments, the performance of FLEX did not differ significantly across the various subgroups, demonstrating the model's stability. For tabular data, we specifically developed the FET architecture. FET allows the model to learn the feature encoding during modality fusion. Through gradient backpropagation, the feature encoding not only represents the information of individual features but also captures the dependencies between them. This feature training approach breaks the data format, model structure, and modal interaction constraints imposed by traditional early, intermediate, and late fusion approaches. Such dependencies and interactions are also reflected in the performance enhancement brought to the image modality by the fusion of FET with the image model. This also indicates that FET can mine complementary semantic information, helping the image branch identify effective features. In addition, the 'Hash mapping' step of FET improves the model's tolerance of different data distributions, allowing application to a wider range of scenarios. Images tend to play a less important role in clinical progression analysis, which is also reflected in our interpretability analysis. In the feature importance analysis, we validated the reasonableness of the features the model focuses on against existing knowledge, and we also point to new directions for the study of biomarkers. To verify the generalizability of FLEX, we compared its performance on an external dataset. The differences in dataset characteristics reflect real clinical situations, and our model still demonstrated reliable capabilities. Although we have validated the model from multiple perspectives, some limitations remain. From a clinical application perspective, we developed the Meta module to cope with a single missing modality, but cases with two or three missing modalities have not yet been addressed. In terms of interpretability, although we have analyzed the distribution of feature importance within each modality, we do not yet know how a single feature can shift the model's attention in other modalities. In conclusion, this study validates that multimodal fusion is more effective than unimodal learning and underscores the importance of enhancing feature representativeness. FLEX, with its capabilities in feature learning and fusion, effectively supports feature extraction, integration, and analysis of multimodal data. Looking ahead, expanding the application of these models to other diseases and engaging in collaborative validation studies will further demonstrate their versatility and effectiveness. These efforts will drive progress in early patient treatment planning and the timely allocation of medical resources, ultimately improving patient outcomes. Summary In this paper, we present FLEX, an artificial intelligence multi-modal data fusion algorithm based on enhancing feature representativeness. FLEX can fuse clinical data, proteomic data, metabolomic data, and pathology images in the application scenario of CKD progression prediction, and demonstrates superior performance. In addition, through comprehensive ablation experiments and interpretability analysis, FLEX demonstrates the stability and reasonableness of its predictions. FLEX can be extended to more data sources and application scenarios in the future, contributing to the wider application of medical resources.
Key Points
For the first time, we introduce the concept of enhancing feature representativeness into the development of medical multi-modal algorithms with FLEX. By aggregating features and optimizing their encoding, FLEX effectively reduces the distribution noise caused by modal heterogeneity while enriching feature information.
For the algorithm experiments, we collected and organized clinical data, proteomic data, metabolomic data, and pathology images. FLEX is the first to use multi-source, multi-modal data for the problem of predicting chronic kidney disease progression.
FLEX introduces FET, a new deep learning model structure for table-structured data. FET allows simultaneous training of the model structure and the feature encoding, improving the model's ability to capture interactions between different patterns and features in high-dimensional spaces.
We validate the reliability of the model through detailed comparative experiments and feature importance analysis on internal and external datasets. The results show that FLEX is robust, rational and generalizable.
The Wnt/β-catenin signaling pathway has a healing ability for periapical periodontitis
c6bbd0f4-382c-4bfc-bba0-d31654280dd8
8490427
Anatomy[mh]
Periapical periodontitis is an inflammatory disease of the periapical tissues caused by an infection in the root canal system due to inadequate root canal treatment. The invasion of microbes from the root canal system into the periapical tissues induces host immune responses , . During the host defense reaction, the host's cells produce various inflammatory and anti-inflammatory molecules, and some inflammatory molecules induce osteoclast differentiation . Bone homeostasis is thus disrupted in periapical periodontitis, promoting increased rates of bone resorption and resulting in the formation of periapical lesions that are observed as radiolucent areas . In recent years, several disease-related genes have been identified for various multifactorial diseases using single nucleotide polymorphisms (SNPs). In case–control studies, some researchers reported that IL-1β polymorphism is associated with periapical periodontitis – . Meanwhile, although several previous SNP studies reported inflammatory cytokines related to periapical periodontitis, only a few reports described other factors , . In the field of bone metabolism, studies reported that the SNP located in exon 18 of the low-density lipoprotein receptor-related protein 5 (LRP5, a Wnt coreceptor) gene, which causes an amino acid change (3989C>T, A1330V), is associated with osteoporosis among patients in Japan, Europe, America, and South America – . Therefore, in this study, we focused on the SNP of LRP5 related to bone metabolism. The Wnt/β-catenin signaling pathway plays a significant role in the maintenance of bone homeostasis , . Wnt proteins bind to their receptor Frizzled and the coreceptor LRP5/6, suppressing the phosphorylation of β-catenin by glycogen synthase kinase-3β (GSK-3β). The stabilized β-catenin accumulates in the cytoplasm. Accumulation and nuclear translocation of β-catenin enable its association with the transcription factors LEF-1/TCF and activate various target genes (the canonical pathway). It has been reported that Li+ can activate the Wnt/β-catenin signaling pathway by inhibiting GSK-3β activity . Experimental studies also reported that lithium chloride (LiCl) induces hard tissue formation in vivo and in vitro , . Conversely, there are also reports describing that the Wnt/β-catenin signaling pathway regulates the differentiation of T cells and B cells . For example, in one study, inhibition of the Wnt canonical pathway blocked the transition from double-negative T cells to double-positive T cells . In another study, LEF-1-deficient mice exhibited defects in pro-B-cell proliferation and survival both in vitro and in vivo . Periapical periodontitis thus involves the host's defense against microbial invasion and alveolar bone resorption resulting from disrupted bone homeostasis. Hence, we focused on the role of the Wnt/β-catenin signaling pathway in both immunity and bone homeostasis. We hypothesized that the A1330V variant of LRP5 might be associated with apical periodontitis. In the present study, we selected individuals with periapical lesions measuring > 3 mm in diameter as the case group. We performed a genetic polymorphism analysis for these individuals and a control group (individuals with no apical lesion). We found that the A1330V variant of LRP5 was associated with periapical periodontitis. To confirm the Wnt/β-catenin pathway's role in the development of periapical periodontitis, we used an in vivo murine periapical periodontitis model .
Although administration of a Wnt/β-catenin pathway inhibitor enlarged the apical lesion, applying LiCl to the root canals accelerated the healing of periapical periodontitis. Our findings demonstrated that the Wnt/β-catenin signaling pathway regulates the development of periapical periodontitis. We propose a novel therapeutic strategy for this dental lesion. LRP5 SNP is related to periapical periodontitis The mean age of patients was 48.2 ± 20.3 years in the case group and 59.8 ± 17.4 years in the control group. The case group consisted of 19 males and 31 females, and the control group consisted of 12 males and 18 females. The mean diameter of the periapical lesions was 6.0 ± 2.7 mm. Table shows the results of genotyping of the LRP5 SNP (A1330V) for the case and control groups. The frequency of the C/T genotype differed significantly between the groups (case group vs. control group, P = 0.00396). Inhibition of the Wnt/β-catenin signaling pathway enlarges the periapical lesion The lower left first molar pulp was exposed, and an inhibitor of the Wnt/β-catenin signaling pathway (IWR-1) was administered once a day after the exposure (Fig. ). The periapical lesion volume was analyzed by micro-CT, showing that the volume in the IWR-1 group was significantly larger than that in the vehicle group at 4 weeks (vehicle group: 1.46 ± 0.01 mm3, IWR-1 group: 1.65 ± 0.33 mm3) (Fig. A). To confirm the suppression of the Wnt/β-catenin signaling pathway, we evaluated the expression of Axin2 by in situ hybridization. The expression of Axin2 was decreased in the IWR-1 group compared to that in the vehicle group after administration of IWR-1 (Fig. B). H–E staining revealed densely hematoxylin-stained cells around the root and the alveolar bone (Fig. ). We observed the expression of Runx2 and Col1a1 on the surface of the alveolar bone around the periapical lesion in both groups, and this expression was decreased in the IWR-1 group compared to that in the vehicle group (Fig. , Supplementary Fig. ). Activation of the Wnt/β-catenin signaling pathway reduced the periapical lesion volume LiCl is a chemical known to activate the Wnt/β-catenin signaling pathway . Hence, we applied it to the root canal after exposure and performed histological analysis and micro-CT analysis at 4 weeks (Fig. ). The periapical lesion volume in the LiCl application group was significantly smaller than that in the control group at 4 weeks after root canal treatment (control group: 3.44 ± 0.30 mm3, LiCl application group: 2.32 ± 0.30 mm3) (Fig. A). To confirm the activation of the Wnt/β-catenin signaling pathway, we evaluated the expression of Axin2 by in situ hybridization. The LiCl application group showed a higher expression level of Axin2 than the control group (Fig. B). H–E staining showed fibroblasts and angiogenesis in both groups (Fig. ). In the control group, the expression areas of Runx2 and Col1a1 were observed only on the bone surface surrounding the periapical lesion (Fig. ). Conversely, in the LiCl application group, the expression areas of Runx2 and Col1a1 were observed not only on the bone surface surrounding the periapical lesion but also within the periapical lesion (Fig. ). There was no difference in CD3-positive T cells between the two groups (Fig. ). In contrast, more CD45R-positive B cells were observed in the LiCl application group than in the control group (Fig. ).
In clinical cases, some patients have no radiolucent area in the periapical lesion despite the apparent occurrence of bacterial infection in the root canal. Conversely, there are also some patients who show a radiolucent area in the periapical lesion despite tight root canal filling. Based on these various clinical cases, we speculated that some factors other than bacterial infection are involved in the onset of periapical periodontitis.
Periapical periodontitis is caused by the immune response to foreign factors, such as bacterial invasions, resulting in the subsequent resorption of the alveolar bone by osteoclasts, induced by the production of inflammatory cytokines during the immune response. Thus, periapical periodontitis is caused by bacterial infection factors and host factors such as immune responses and bone metabolism. Recently, focusing on such host factors, several disease-related genes have been identified using SNPs for various multifactorial diseases. Therefore, we decided to analyze the genes related to periapical periodontitis. We selected LRP5 , a gene involved in the bone metabolism system, as a candidate gene related to immune response and bone metabolism – . We found that the distribution of the LRP5 SNP (rs3736228) was significantly biased toward the heterozygous genotype (Table ). Researchers reported that A1330V is associated with T cell factor/lymphoid enhancer factor family (Tcf/Lef) activity in an in vitro study . Therefore, in this study, we identified a new SNP associated with the onset of periapical periodontitis. To address the mechanism and factors related to periapical periodontitis, we used a recently developed murine periapical periodontitis model , . Recently, some factors related to periapical periodontitis have been clarified , . Based on the results of our SNP analysis, we sought to clarify the role of the Wnt/β-catenin signaling pathway in periapical periodontitis. First, we analyzed the volume of the periapical lesion when the Wnt/β-catenin signaling pathway was suppressed. IWR-1, a low-molecular-weight compound, can penetrate the cell membrane easily and effectively inhibits cellular signaling pathways . IWR-1 stabilizes the complex of Axin2, APC, Ck1, and Gsk3β and then promotes the degradation of β-catenin. In the present study, an enlarged volume of the periapical lesion was observed in the IWR-1 group compared to that in the vehicle group (Fig. A). Therefore, we performed in situ hybridization to verify whether the enlarged periapical lesion volume in the IWR-1 group was caused by the inhibition of the Wnt/β-catenin signaling pathway by IWR-1. We found that Axin2 expression was suppressed in the IWR-1 group (Fig. B). Furthermore, the expression levels of Col1a1 and Runx2 , osteoblast differentiation markers, were decreased in the IWR-1 group (Fig. , Supplementary Fig. ). There was no change in osteoclast differentiation (Supplementary Fig. ). These results demonstrated that the inhibition of the Wnt/β-catenin signaling pathway by the administration of IWR-1 suppressed the differentiation of osteoblasts and thereby promoted enlargement of the periapical lesion. Because the inhibition of the Wnt/β-catenin signaling pathway resulted in enlargement of the periapical lesion, we speculated, based on the IWR-1 experiment, that activation of the Wnt/β-catenin signaling pathway would reduce the volume of the periapical lesion. We next explored the relationship between the activation of the Wnt/β-catenin signaling pathway and the volume of the periapical lesion. LiCl is known to activate the canonical Wnt pathway: it inhibits the activity of GSK3β, induces the accumulation of β-catenin in the cytoplasm, and activates transcription by promoting the nuclear translocation of β-catenin. Therefore, we examined whether LiCl promotes the healing of periapical periodontitis using a murine periapical periodontitis model , .
The periapical lesion volume in the LiCl application group was significantly reduced after 4 weeks of LiCl application (Fig. A). Next, we performed in situ hybridization to verify whether the reduced periapical lesion volume observed in the LiCl application group was caused by the activation of the Wnt/β-catenin signaling pathway by LiCl. Tissue sections obtained 24 h after LiCl application showed high expression of Axin2 (Fig. B). This result suggested that LiCl activates the Wnt/β-catenin signaling pathway in the periapical lesion. In addition, LiCl application resulted in a higher expression of Col1a1 and Runx2 than that in the control group (Fig. , Supplementary Fig. ). These results demonstrated that the activation of the Wnt/β-catenin signaling pathway by LiCl induces the differentiation of osteoblasts and promotes the healing of periapical periodontitis. Because the Wnt canonical pathway regulates the differentiation of immune cells, we analyzed the effect of LiCl application on the differentiation of immune cells. Although there was no difference in the number of CD3-positive cells between the LiCl application group and the control group (Fig. ), the number of CD45R-positive cells was higher in the LiCl application group (Fig. ). CD45R is expressed from pro-B cells to mature B cells, and a previous study showed that the Wnt/β-catenin signaling pathway positively regulates the proliferation ability of B cells . This is consistent with the increased number of B cells observed in the periapical lesions of the LiCl application group. Although LiCl reportedly maintains T cells in an undifferentiated state , no change occurred in CD3-positive cells. This discrepancy might be due to the different application methods, i.e., the local application into the root canal in this study and the systemic administration in the previous study . In summary, LiCl promotes the healing of periapical periodontitis by inducing bone formation and immune responses. Although conventional root canal treatment agents such as calcium hydroxide primarily target the disinfection of bacteria in the root canal, LiCl targets the Wnt/β-catenin signaling pathway, thereby regulating bone metabolism and the immune response. Therefore, our results suggest that LiCl could be used as a bioactive next-generation root canal treatment agent. Sample population and genotyping This study was approved by the research Ethics Review Committee of Osaka University (#450), Osaka, Japan and the Ethics Review Committee of Osaka University Dental Hospital (#H25-E22). All participants signed an informed consent form and provided a sample as a source of genomic DNA. Participants whose radiographic records showed periapical lesions measuring > 3 mm in diameter were assigned to the case group (n = 50). Participants who presented with no periapical lesions after root canal treatment were assigned to the control group (n = 30). Buccal mucosa was collected by wiping with a sterile swab. The tip of the swab was immersed in a lysis buffer [100 mM Tris–HCl (pH 8.5), 5 mM EDTA, 0.2% SDS, 200 mM NaCl, 100 μg proteinase K/ml] and reacted at 55 °C for 24 h. After the reaction, genomic DNA was prepared by isopropanol precipitation and ethanol precipitation. Genomic DNA was diluted in 100 μl TE solution and stored at − 80 °C. PCR was performed for LRP5 (dbSNP ID: rs3736228) gene amplification using TaqMan GTXpress™ Master Mix and TaqMan SNP Genotyping Assays (Applied Biosystems, California, USA).
All methods were performed following the relevant guidelines and regulations set forth by the Declaration of Helsinki. Informed written consent was obtained from all participants. Murine periapical periodontitis model The study was approved by the research Ethics Committee of Osaka University, Osaka, Japan, and all experiments were performed according to the guidelines related to animal care (AD-26-011-0) and compliant with ARRIVE guidelines ( http://www.nc3rs.org.uk/page.asp?id=1357 ). C57BL/6J mice (8 weeks old) were intraperitoneally anesthetized with Domitor (0.3 mg/kg) (Nippon Zenyaku Kogyo Co., Fukushima, Japan), Dolmicam (4 mg/kg) (Astellas Pharma Inc., Tokyo, Japan), and Betferar (5 mg/kg) (Meiji Seika Pharma, Tokyo, Japan). The pulp chamber of the lower first molar was accessed with a #1/4 round bur (Dentsply, Ballaigues, Switzerland) equipped with an electric engine (VIVA MATE G 5, NSK, Tochigi, Japan). The root canals were instrumented with a #08 K-file (Dentsply) under the operating microscope and left exposed to the oral cavity. Mice were monitored daily for clinical signs of abnormal posture, lack of grooming, weight loss exceeding 20% of body weight, and anorexia. Although the presence of any of these findings was considered an endpoint at which mice would be euthanized using a CO2 chamber, none of these findings were observed in the mice used for the experiments. IWR-1 administration The following solutions (0.04 ml) were administered into the tail vein once a day from the day of pulp exposure (Fig. ): control solution, DMSO (Wako Pure Chemical Industries, Osaka, Japan) diluted in phosphate-buffered saline (PBS) to a final concentration of 5%; IWR-1 solution, IWR-1 (Sigma-Aldrich, Missouri, USA; 2.5 μmol/kg) diluted in the control solution. At 4 weeks after the first administration, the periapical lesion volume was measured (each group, n = 4). LiCl application At 4 weeks after pulp exposure, root canal cleaning was performed using a #10 K-file (Dentsply). LiCl was ground into a fine powder with a particle size below 100 μm. After root canal cleaning, LiCl (Wako Pure Chemical Industries, Ltd.) was applied into the root canal (0.025 g of powder per root canal) with a slightly moistened #10 K-file (Dentsply) under the operating microscope. Root canals in the control group were left empty. The pulp chamber was closed with the bonding material Clearfil Bond SE ONE® (Kuraray Noritake Dental, Tokyo, Japan) and a composite resin (MI flow: GC, Tokyo, Japan). Micro-CT measurement The volume of the periapical lesion was measured by micro-computed tomography (CT) (R_mCT 2: Science Mechatronics, Tokyo, Japan) performed on the lower first molar. The imaging conditions were set as follows: tube voltage 90 kV, tube current 160 μA, and slice width 5 μm. The obtained images were analyzed using SimpleViewer software (Science Mechatronics). Based on the methods described by Kalatizis-Sousa et al. and Yoneda et al. , the periapical lesion volume was calculated using the bone morphometry software (TRI 3D-BON: RATOC, Osaka, Japan). The lesion volume was defined as the periapical transmission image volume and compared between experimental groups as previously reported . Student's t-test was used to evaluate differences in lesion volume between groups (α = 0.05).
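To illustrate the volume comparison described above, the following sketch shows how group means and a two-sample Student's t-test could be computed for micro-CT lesion volumes; the numbers are hypothetical placeholders (n = 4 per group), not the measured values from this study.

```python
# Minimal sketch of the lesion-volume comparison (alpha = 0.05).
# Volumes are hypothetical placeholders, not the study's measurements.
import numpy as np
from scipy import stats

vehicle = np.array([1.45, 1.46, 1.47, 1.46])  # hypothetical volumes (mm^3), n = 4
iwr1 = np.array([1.30, 1.55, 1.80, 1.95])     # hypothetical volumes (mm^3), n = 4

print(f"vehicle: {vehicle.mean():.2f} ± {vehicle.std(ddof=1):.2f} mm^3")
print(f"IWR-1:   {iwr1.mean():.2f} ± {iwr1.std(ddof=1):.2f} mm^3")

# Two-sample Student's t-test (equal variances assumed)
t_stat, p_value = stats.ttest_ind(vehicle, iwr1, equal_var=True)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}, significant at 0.05: {p_value < 0.05}")
```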
Sample preparation for histological analysis After subjecting the mice to the experiments described above, they were perfusion-fixed in 4% paraformaldehyde (PFA) solution. The mandibles were collected, immersed, and fixed in 4% PFA solution for 24 h, followed by demineralization with 10% EDTA solution for 2 weeks. After demineralization, the samples were dehydrated with an ascending ethanol series and embedded in paraffin. Then, 9-μm-thick sections were prepared. Hematoxylin–eosin staining Paraffin sections were deparaffinized, washed with water, and reacted with Mayer's hematoxylin solution (Muto, Osaka, Japan) for 7 min. Then, the sections were washed with running water for 20 min, stained with eosin solution (Merck, Darmstadt, Germany) for 5 min, dehydrated and decolorized with ethanol, cleared with xylene, and sealed with 50% glycerol/PBS. The apical lesion and the surrounding bone were observed under an optical microscope (Axioskop 2 plus; Carl Zeiss, Aalen, Germany). In situ hybridization Paraffin sections were deparaffinized, washed with 0.01 M PBS, fixed with 4% PFA for 10 min, and washed again with PBS. The sections were reacted with 1 μg/ml proteinase K (Takara Bio, Shiga, Japan) for 5 min and then post-fixed with 4% PFA. This was followed by acetylation with 0.1 M triethanolamine containing 0.25% acetic anhydride and washing with 0.01 M PBS. Prehybridization was carried out at 55 °C for 1 h, and hybridization was carried out overnight at 70 °C using digoxigenin-labeled RNA probes: Axin2 (Addgene #21277, NM_015732, nt1774–2787), Col1a1 (NM_007742, nt2955–3415), and Runx2 (AF010284, nt922–1746). After hybridization, the sections were placed in fivefold-concentrated saline–sodium citrate solution (5× SSC) for 20 min. They were reacted with 0.2× SSC at 70 °C for 20 min and left to stand in 0.2× SSC in maleate buffer (MBA) for 5 min. Then, blocking was performed for 2 h with a blocking solution containing 5% goat serum (Vector Laboratories, California, USA). After reacting with alkaline phosphatase (AP)-labeled anti-digoxigenin antibody (1:5000) (Roche, Basel, Switzerland) at 4 °C for 24 h, the sections were washed with MBA supplemented with 0.1% Tween 20, followed by washing with distilled water. The sections were reacted with BM Purple AP (Roche) as a substrate for 6 h at room temperature, after which they were washed with PBS and sealed with 50% glycerol/PBS. The apical lesion and the surrounding bone were observed under an optical microscope. Immunohistochemistry The paraffin sections were deparaffinized, washed with Tris-buffered saline (TBS), reacted at 100 °C for 10 min in a citrate buffer, and allowed to stand at room temperature for 30 min. After blocking for 1 h with 10% goat serum-containing blocking solution (10% goat serum/TBS), each primary antibody was reacted overnight at room temperature. The antibodies were used at 1:100 for CD3 (ab16669: Abcam, Cambridge, United Kingdom) and 1:50 for CD45R (ab64100: Abcam). After reaction with the primary antibody, the sections were washed with TBS and incubated with 1:500 Alexa 488-goat anti-rabbit IgG (Invitrogen, California, USA) or 1:500 Alexa 594-goat anti-rat IgG (Abcam) for 2 h. Then, the sections were washed and reacted with DAPI (4′,6-diamidino-2-phenylindole) (Sigma) for 15 min. After the reaction, they were washed and sealed with 50% glycerol. The apical lesion and the surrounding bone were observed under a fluorescence microscope.
Statistical analyses Results are presented as mean ± SD. For SNP analyses, comparisons between groups were performed using Fisher's exact test (α = 0.05). Student's t-test was used to evaluate differences in the periapical lesion volume.
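As a concrete illustration of the SNP comparison, here is a minimal sketch of a Fisher's exact test on a 2 × 2 table of heterozygote counts; the counts below are hypothetical stand-ins, not the genotyping results reported in the table.

```python
# Minimal sketch of the Fisher's exact test used for the SNP analysis.
# Counts are hypothetical placeholders: rows = case / control groups,
# columns = C/T heterozygotes / other genotypes.
from scipy.stats import fisher_exact

table = [[26, 24],   # case group (n = 50): C/T carriers, other genotypes
         [6, 24]]    # control group (n = 30): C/T carriers, other genotypes

odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.4f} (alpha = 0.05)")
```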
Variability of functional and biodiversity responses to perturbations is predictable and informative
ec5c1d63-92fd-42bd-a5c3-f3a2bd46f48c
11604961
Microbiology[mh]
Describing aggregate properties of ecosystems and predicting their behaviour in the face of perturbations is a major goal of contemporary ecology. If consistent patterns emerged when considering aggregate-level responses, ecologists could aim for data-based predictions and provide clear, practical recommendations , . However, there are many relevant aggregate properties to consider, from diversity metrics to ecosystem functions, that may all respond in different ways to perturbations , . As there is no obvious way to organize this variability, the hopes for general predictions of community-level responses to perturbations can seem slim. The importance and origin of species diversity was a central theme of late 20th century ecology – , which led to a proliferation of metrics to define and measure diversity based on the richness, evenness and rarity of species – . Since then, understanding how species collectively perform a function has become a prominent area of research – , with clear implications for our understanding of concrete issues regarding productivity, carbon sequestration, pollination, or nutrient cycling of natural or engineered ecosystems. In light of rapid anthropogenic global change, there is currently increased focus on understanding how aggregate ecological properties will respond to perturbations such as land-use change, invasive species, climate change and pollution , – . Ecologists are very aware that different aggregate properties, such as diversity metrics or ecosystem functions, describe very different aspects of communities and may thus respond in completely different ways to a given environmental perturbation , , . For instance, the many different diversity metrics employed by ecologists describe different facets of community structure , . If a perturbation caused the extinction of rare species while making the overall abundance distribution of the community more even, species richness would decrease, but a measure of evenness (e.g. Simpson’s index) would increase. Similarly, ecosystem functioning takes many forms, and can be measured in a myriad of ways. Some functions, such as biomass production or respiration, are broad functions: they are performed by most or all species in a community. Other functions, such as the breakdown of specific chemicals or the production of specific enzymes, are narrow in the sense that they require the presence of particular species, or combinations of species, to be performed , . The great variety of ecosystem functions—in what they do, how broad or narrow they are, how species contribute to them, and how they respond to perturbations—has motivated the rapid development of multifunctional ecology where multiple functions are considered at once to more accurately characterize the state of an ecosystem – . In the face of this inherent ecological complexity, what can be learned from the variability of functional and biodiversity responses to perturbations? Here we claim that this variability can be used to explore hidden features of ecosystems and of perturbations. To make this point we analyse data from global change experiments conducted in microbial soil systems (Box and Fig. ). Focusing on three diversity metrics, two broad ecosystem functions, and eight narrow ecosystem functions, we explore patterns of mismatches between functional and diversity responses to global change factors (such as pollution, environmental events or land-use change, all seen here as perturbations). 
Concretely, we look at the proportion of cases where one aggregate property responds negatively to a perturbation while the other responds positively to it. As expected, we find a great degree of variability in responses to perturbations. This variability, however, is not random, but instead shows a recognizable degree of structure. Aggregate properties that are thought to describe ecosystems in similar ways (e.g. production of beta-xylosidase and production of cellobiohydrolase, enzymes that contribute to carbon cycling) have a lower proportion of mismatches than would be expected by chance (modules of blue squares, Fig. A). On the other hand, diversity metrics and ecosystem functions tend to systematically differ in how they respond to perturbations (dominance of red squares between diversity and ecosystem functions, Fig. A). Our intuitions about how mechanistically similar aggregate properties are (i.e. how we ordered the observations in Fig. A) thus provide a useful starting point for understanding ecosystems' response variability, and we also find that there exist generic diversity-function response patterns. Motivated by the findings of this empirical synthesis, we propose a framework that helps us glean useful, hidden information from the variability of functional and diversity responses to perturbations. To do so, we convert the ecological problem into a simpler geometrical one by representing perturbations as displacement vectors and community aggregate properties as directions in community state-space (the high-dimensional space whose axes report the biomasses of all constituent species). The central ingredient of our framework is a geometrical definition of collinearity between two aggregate properties which quantifies their similarity and predicts whether they will respond to a perturbation in the same way (Fig. ). This prediction assumes a high response diversity at the species level, and depends on how species' responses to perturbations scale with their biomass. Here, coarse-grained assumptions about population-level responses are used to better understand ecosystem functions. Conversely, we show that with some knowledge of the aggregate properties used to observe the ecological impacts of perturbations, the variability of these observations can be leveraged to gain information about species response diversity and how species' responses scale with their biomass. Armed with our geometrical framework we then reanalyse the empirical data from microbial soil systems to gain new insights on soil microbial ecosystem functions and how they are being impacted by anthropogenic global change. As well as proposing novel methods for validating and applying our framework to ecological data (outlined in an online tutorial at https://jamesaorr.github.io/community-properties-tutorial ), we more broadly aim to inspire new approaches to studying complex ecological systems that embrace the variability of community-level responses to perturbations, using perturbations as probes to reveal hidden features of ecosystem dynamics and functioning. Box 1 Initial analysis of empirical data To quantify the variability of functional and biodiversity responses to perturbations we analysed a dataset of global change experiments conducted in microbial soil systems . This dataset contained 1235 perturbations from 341 publications. Perturbations included warming, elevated carbon dioxide levels, altered precipitation, nutrient enrichment, land-use change, or combinations of these factors.
The effect of each perturbation in a given experiment was quantified using the natural logarithm-transformed response ratio (Box 1 Eq. 1): $$\mathrm{RR} = \ln\left(\frac{X_{t}}{X_{c}}\right)$$ where $X_{t}$ and $X_{c}$ are the means of the treatment and control groups for a given aggregate property. The variances of these effect sizes are also available in the dataset, but we do not require them for this initial analysis as we do not exclude points based on some statistical cutoff. Indeed, following our geometric approach, there is no reason to expect that the proportion of mismatches between two aggregate properties would be different for data points with or without statistically significant results. Each individual perturbation was quantified using multiple aggregate properties covering a wide range of ecosystem functions and measures of diversity. We focused on aggregate properties where all pairs had at least ten observations in the dataset so that the proportion of mismatches between them could be estimated with some robustness. This arbitrary number of observations was chosen to strike a balance between having enough observations to estimate proportions of mismatches reliably and having enough pairs of aggregate properties to see general patterns across broad functions, narrow functions, and diversity metrics. Choosing other cut-offs does not qualitatively change the results (demonstrated in the R markdown at 10.5281/zenodo.13985015). This filtering of the data returned 1015 perturbations that were measured with at least two of thirteen aggregate properties including three measures of diversity (richness, Shannon index, and Chao index), two broad ecosystem functions (biomass and respiration), and eight narrow ecosystem functions subdivided into P-cycling enzymes (phosphatase), N-cycling enzymes ( N -acetyl-beta-glucosaminidase), hydrolytic C-cycling enzymes (beta-xylosidase, cellobiohydrolase, beta-glucosidase, and alpha-glucosidase), and oxidative C-cycling enzymes (peroxidase, phenol oxidase). Details of how the functions were measured (e.g. whether respiration was calculated in the laboratory or the field, or whether enzymes were measured using colourimetric or microplate assays) can be found in Zhou et al. (2020). This list of aggregate properties was sorted a priori based on intuitions about their underlying mechanisms (grouped by diversity metrics, broad functions and narrow functions based on Zhou et al. (2020)) and a heatmap was made to visualize the proportion of mismatches between each pair (Fig. A). If the variability between aggregate properties was just random (i.e. if the heatmap was all white or just showed random distributions of red and blue) there might not be much more to say, but if the heatmap showed some structure there could be useful information to gain from the variability. Indeed, the modularity of the heatmap shows that aggregate properties that are thought to be similar tend to respond to perturbations similarly (e.g. relatively low proportion of mismatches—ranging from 0.16 to 0.28—between measures of diversity). Conversely, groups of aggregate properties that describe different aspects of a community can systematically differ in their responses to perturbations (e.g. an abundance of red between diversity metrics and ecosystem functions, with the proportion of mismatches going as high as 0.73). We will return to these empirical results after we have outlined our geometrical approach for quantifying the notion of similarity between aggregate properties.
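To make these calculations concrete, here is a short sketch of how the log response ratios and the pairwise mismatch proportions behind the heatmap could be computed; the example values are hypothetical and are not taken from the Zhou et al. dataset.

```python
# Sketch of the Box 1 computations: log response ratios per aggregate
# property, then the proportion of sign mismatches between two properties
# across perturbations. All values are hypothetical placeholders.
import numpy as np

def log_response_ratio(x_treatment, x_control):
    """Box 1 Eq. 1: RR = ln(X_t / X_c), for positive-valued group means."""
    return np.log(np.asarray(x_treatment, float) / np.asarray(x_control, float))

def mismatch_proportion(rr_a, rr_b):
    """Fraction of perturbations where the two properties respond with opposite signs."""
    rr_a, rr_b = np.asarray(rr_a), np.asarray(rr_b)
    return float(np.mean(np.sign(rr_a) != np.sign(rr_b)))

# Hypothetical treatment/control means for two properties over five perturbations
biomass_rr = log_response_ratio([8.0, 5.5, 12.0, 7.0, 9.5], [10.0, 6.0, 10.0, 7.5, 9.0])
richness_rr = log_response_ratio([45, 62, 48, 55, 58], [50, 60, 50, 50, 60])

print("proportion of mismatches:", mismatch_proportion(biomass_rr, richness_rr))
```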
In fact, we can use our framework to reinterpret these empirical data to gain useful insights into how the perturbations in these experiments impacted these communities and also into how the species in these communities contribute to the different ecosystem functions. Our geometrical arguments—outlined in the Methods and in Fig. —are well supported by our simulation results and can be used to refine the analysis of the empirical data. By simulating perturbation experiments on species-rich communities, we show how mismatches in the observations of two functions can be used to quantify the similarity of those functions and can be used to estimate a notion of response diversity (Fig. ). We also found that mismatches in the observations of a function and a diversity metric can be used to quantify the scaling of perturbations by species biomass (Fig. ). Returning to the empirical data, we applied a validation test (formally described in Supplementary Note ) to show that the data, when grouped by biome, meet the assumptions of our geometrical arguments. We could then quantify (i) the similarity and broadness of empirically measured functions, and (ii) the response diversity and biomass scaling of key global change factors (Fig. ). Mismatches between functions In theory, the proportion of qualitative response mismatches between two linear ecosystem functions directly depends on their collinearity (Eq. ), that is, the angle between their respective directions in phase-space (the high-dimensional space whose axes report the biomasses of all constituent species). This is confirmed by our simulations, whose outcomes are represented in Fig. E. This basic result, however, hinges upon the assumption that perturbations are unbiased at the population level, meaning that approximately half the species show positive responses and half the species show negative responses to any given perturbation. If population-level responses are biased towards positive or negative, the geometrical prediction overestimates the proportion of mismatches (Fig. ). This effect occurs because when perturbation effects on species are mostly negative (or mostly positive), they tend to fall in the areas of phase space where functions will necessarily observe the same responses (top right and bottom left quadrants in Fig. D). Because this systematic overestimation indicates that a key assumption is violated, it informs us about population-level effects of perturbations. We can therefore deduce a link between mismatches in observations at the community level and information on population-level response diversity (Fig. B). Deviations from our predictions reveal a degree of population-level response diversity to the perturbation considered. Mismatches between functions and diversity metrics The probability of mismatches between ecosystem functions and diversity metrics can be predicted by considering the angle between the function and the gradient of the diversity metric (Fig. A). Again, consistency of responses at the population level causes the prediction to overestimate the actual proportion of mismatches. We note, however, that the angle between the direction associated with a positive function and a diversity metric can exceed 90°, leading to a systematic bias towards qualitative response mismatches. This intriguing result is connected to a second piece of population-level information: the scaling of perturbations by species biomass (Box Eq ).
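The geometry behind these predictions can be sketched in a few lines of code before turning to the role of biomass scaling below. In the sketch, two aggregate properties are represented as direction vectors in state-space and, assuming unbiased and isotropic perturbation displacements, the expected proportion of mismatches is taken to be the angle between the directions divided by π, which is our reading of the geometric argument above; the per-capita contribution vectors and the community state are hypothetical.

```python
# Sketch of the geometrical mismatch prediction: aggregate properties as
# directions in community state-space; assuming isotropic, unbiased
# perturbation displacements, expected mismatch proportion = angle / pi.
# The contribution vectors and community state below are hypothetical.
import numpy as np

def angle_between(u, v):
    """Angle (radians) between two directions in state-space."""
    u, v = np.asarray(u, float), np.asarray(v, float)
    cos_theta = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.arccos(np.clip(cos_theta, -1.0, 1.0))

def predicted_mismatch(u, v):
    """Expected mismatch proportion under isotropic, unbiased perturbations."""
    return angle_between(u, v) / np.pi

# Hypothetical per-capita contributions of five species to two functions
broad_function = np.array([1.0, 1.0, 1.0, 1.0, 1.0])    # e.g. total biomass
narrow_function = np.array([0.0, 0.0, 0.0, 1.0, 2.0])   # e.g. one specific enzyme

# Gradient of Shannon diversity H = -sum(p_i ln p_i) at a hypothetical state x
x = np.array([10.0, 5.0, 3.0, 1.5, 0.5])
p = x / x.sum()
shannon_gradient = -(np.log(p) - np.sum(p * np.log(p))) / x.sum()

print("function vs function :", predicted_mismatch(broad_function, narrow_function))
print("function vs diversity:", predicted_mismatch(broad_function, shannon_gradient))
```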
When the effect of perturbations is larger for more abundant species, function and diversity show qualitatively different responses (only the larger points are above the red line in Fig. A). If a perturbation causes the biomass of abundant species to decrease, total biomass will decrease but a diversity measure related to evenness will increase. If on the other hand, a perturbation causes the biomass of abundant species to increase, total biomass will increase but evenness will decrease. This means that the degree of scaling of species responses to perturbations by their biomass can be predicted based on the observed proportion of mismatches between total biomass and diversity measures (Fig. B). Empirical results The validation test of the geometrical framework (outlined in Supplementary Note ) with the entire Zhou et al. datasets (1235 perturbations tested across a huge diversity of biomes including agricultural systems, tundra, desert, and wetlands) was negative. We found no correlation between actual mismatches between two given functions and predicted mismatches based on the mismatches with other functions, if the latter could be seen as vectors in a given phase space . However, validation tests with data from either grassland systems ( n = 367) or forest systems ( n = 435) provided very conclusive support (strong correlation between predicted and realized mismatches) for the use of our geometrical framework (Fig. A). The fact that the test was inconclusive when pooling all data together should not be surprising, since the notion of unique phase space to position the different systems does not make sense. Only when grouping by biome can this fundamental assumption stand a chance of being a useful approximation (but it could very well have failed as the systems remain very different: unlike simulation experiments, the data does not represent repeated perturbations of the same system). A network depicting the similarity of functions using the grasslands dataset further reinforced that mismatch data coincides with our mechanistic understanding of these ecosystem functions (Fig. B). The only two broad functions in the network, biomass and respiration, are beside each other in the network and the seven narrow functions (production of different enzymes) grouped together as would have been expected a priori . Of the ecosystem functions with enough observations to make estimates of their broadness, respiration was the broadest, followed by net nitrogen mineralization rate, and then by three specific enzymes related either to carbon cycling (phenol oxidase and beta-1,4-glucosidase) or to phosphorus cycling (phosphatase). These quantitative estimates support our basic biological intuitions about these systems: a few species contribute to the production of a specific enzyme, more species are involved in the mineralization of nitrogen, and more species still contribute to whole ecosystem respiration. The estimates of broadness for beta-1,4-glucosidase and phosphatase were almost identical for forests and grasslands but respiration was estimated to be more broad in grasslands than in forests (Fig. C). In Supplementary Note we further show that the estimated broadness of functions based on their mismatches with total biomass can even be used to predict their actual proportion of mismatches. 
The perturbations in the dataset that had enough observations for us to examine their response diversity and/or biomass scaling were warming, carbon dioxide enrichment, phosphorus addition, nitrogen addition, phosphorus and nitrogen addition combined, nitrogen addition and increased precipitation combined, and four types of land-use change: conversion from native ecosystems to agriculture, to pasture, to plantation or to secondary ecosystems. For grasslands, conversion to secondary ecosystems or the addition of nitrogen or carbon dioxide had relatively low response diversity while warming had relatively high response diversity. For forests, land-use change (particularly conversion to pastures) had relatively low response diversity while warming and the addition of nitrogen had relatively high response diversity (Fig. D). For grasslands, land-use change typically showed strong biomass scaling while nutrient enrichment and warming showed relatively weak biomass scaling. For forests, conversion to pasture had relatively low biomass scaling while conversion to secondary ecosystems had relatively high biomass scaling with the other perturbation types falling in between the two (Fig. E). In general, land-use change perturbations had low response diversity and high biomass scaling indicating that species respond in the same way (presumably negatively) and have absolute changes relative to their biomass (e.g. consistent with a perturbation decreasing the biomass of all species by 50%). Perturbations like warming and nutrient enrichment, on the other hand, typically had high response diversity and low biomass scaling indicating that some species responded negatively while some responded positively to these perturbations and that absolute responses were not completely proportional to initial biomass.
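To make the notions of response diversity and biomass scaling used above more tangible, the following schematic simulation draws each species' response as a random sign multiplied by a magnitude proportional to its biomass raised to a scaling exponent, and records how often total biomass and Shannon diversity respond with opposite signs; the community model and parameter values are assumptions chosen for illustration, not the simulation design of the original study.

```python
# Schematic illustration of biomass scaling: species responses are drawn as
# random signs with magnitudes proportional to biomass**alpha, and we count
# how often total biomass and Shannon diversity change in opposite directions.
# The community model and parameter values are illustrative assumptions only.
import numpy as np

rng = np.random.default_rng(0)

def shannon(x):
    p = x / x.sum()
    return -np.sum(p * np.log(p))

def mismatch_biomass_vs_shannon(alpha, n_species=50, n_perturbations=2000, strength=0.1):
    mismatches = 0
    for _ in range(n_perturbations):
        x = rng.lognormal(mean=0.0, sigma=1.0, size=n_species)        # community state
        delta = strength * (x ** alpha) * rng.normal(size=n_species)  # perturbation
        x_new = np.clip(x + delta, 1e-9, None)
        d_biomass = x_new.sum() - x.sum()
        d_shannon = shannon(x_new) - shannon(x)
        mismatches += int(np.sign(d_biomass) != np.sign(d_shannon))
    return mismatches / n_perturbations

for alpha in (0.0, 0.5, 1.0):  # no scaling, intermediate, full biomass scaling
    print(f"alpha = {alpha:.1f}: mismatch proportion = {mismatch_biomass_vs_shannon(alpha):.2f}")
```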
Variability of results, or "context-dependency", is pervasive in ecology . While this is partly what makes ecosystems so fascinating to study—indeed there is great interest in the mechanistic underpinning of contrasting responses of diversity and function to perturbations —it could also be viewed as an obstacle to the synthesis of previous results and to the prediction of future impacts. Our research has focused on some of this variability (the variability between the responses of community aggregate properties to a given perturbation) and found that it is predictable and also a rich source of information. Mismatches between the responses of different aggregate properties to a class of perturbations (e.g. land-use change) can give us previously hidden information about the aggregate properties themselves (i.e. similarity and broadness of ecosystem functions) and about how such perturbations impact the species that constitute the community (i.e. response diversity and biomass scaling). Ecological research is typically reductionist, using information about individuals and populations to understand communities and ecosystems . Our work demonstrates the reverse approach by using information about communities to understand population-level responses.
In this paper we have reported two analyses of the microbial soil system dataset: (i) an initial, naive synthesis that we used to motivate our work (Fig. ), and (ii) a more detailed analysis informed by our geometrical framework (Fig. ). Our geometrical approach helped to explain some of the interesting patterns in the initial analysis—such as the relatively high levels of mismatches between broad and narrow functions and between functions and diversity—but more importantly, it allowed us to take our biological interpretations further and to extract new information from the data using a novel type of analysis. For instance, we found that perturbations associated with global change vary greatly in their response diversity (Fig. D). Land-use change typically had relatively low response diversity (i.e. most species responded in the same direction), while warming showed relatively high response diversity (i.e. some species increased in abundance while others decreased in abundance). Furthermore, we found that biomass scaling is a prominent feature of anthropogenic perturbations of these ecosystems. The proportion of mismatches between total biomass and Shannon diversity—positively correlated with the biomass scaling exponent (Fig. B)—ranged from ∼0.3 for warming in grassland systems all the way up to ∼0.8 for some land-use change perturbations. In other words, species that initially represent a large proportion of the overall biomass in these microbial systems also represent a large proportion of the variation in biomass caused by global change factors. In the next two sections, we will first outline in more detail the general empirical applications of our proposed framework and we will then discuss the future research directions that our geometrical perspective of aggregate properties could lead to. Empirical applications Although we have shown that variability of community-level responses to perturbations can be predicted, our geometrical framework does not attempt to predict how specific aggregate properties will respond to specific perturbations. Instead, based on the assumption that functions can be seen as directions (which amounts to assuming that per-capita contributions of species to functions are fixed), it can be used to generate null expectations for when aggregate properties should and shouldn’t respond in the same way to a perturbation. From a practical perspective, our framework therefore offers a novel set of methods (demonstrated in the tutorial available at: https://jamesaorr.github.io/community-properties-tutorial/ ) that ecologists can use to study species’ contributions to ecosystem functions and the population-level effects of perturbations. As the central ingredient of our framework is the proportion of mismatches in the observations of different aggregate properties, increasing the volume of data will lead to more robust estimates. Indeed, the size of the points in Fig. C–E indicates the confidence of those estimates. However, there is a trade-off between the volume of data used to quantify proportions of mismatches and the consistency of the underlying systems; the validation test was inconclusive when we pooled data from all biomes but gave very convincing results when we focused on either the grasslands or forests systems. Given the requirements for moderate to high volumes of data, our framework is probably best suited for use in research synthesis, where it can be used to complement traditional tools like meta-analyses. 
The geometrical view of aggregate properties allows us to use perturbations as probes to better understand how species influence the functioning of ecosystems. We found that the proportion of mismatches between functions can be used to quantify their similarity in terms of which species contribute to them. This was demonstrated by the modularity of the heat map in Fig. A and of the network in Fig. B. This observation is certainly reassuring, as it confirms that mechanistic understandings at the chemical level of microbial functions are consistent with ecosystem-level observations. Furthermore, given that total biomass is by definition the broadest function, we can now use mismatches between a function of interest and total biomass to quantify the broadness of that function (demonstrated by numerical evidence in Fig. E inset and empirical evidence in Fig. C). If an ecologist were interested in a new ecosystem function, they could quickly compare it to other functions based on how it responds to perturbations, estimate how broad it is, and identify which species contribute to it (based on its similarity to functions with more information about their species' contributions). Understanding the links between community composition and functioning has far-reaching implications for many sectors including ecosystem management, agriculture, forestry and medicine – and our approach contributes to recent efforts to study ecosystem functions in their natural context, in contrast to the traditional reductionist approach of using controlled experiments where populations or even organisms are studied in isolation , . Our framework can also be used to study population-level responses to perturbations from the top down by comparing the observations of different functions. Response diversity—the variation between species responses to a perturbation—can be measured in different ways and is a key mechanism underlying ecological stability and the biological insurance hypothesis – . Although the information we can gain using our geometrical approach (i.e. the proportion of species responding positively or negatively—see Fig. ) is a coarse measure of response diversity, it can be accessed by just comparing the observations of different functions (e.g. total biomass and respiration) rather than actually measuring each species' response. The easiest approach is to take two functions and compare their proportion of mismatches over different perturbations (or different systems or different contexts) to gain a relative measure of response diversity (as we did in Fig. D). However, if the collinearity between two functions is known (for well-studied functions, or by using our approximations based on the estimated broadness of the functions), then we can use the deviations from our null expectation to quantify the population-level response diversity (Fig. B). Another useful piece of information that can be gained with our top-down approach is the biomass scaling of a perturbation (i.e. whether the direct effect of a perturbation is proportional to the biomass of each species). This feature of perturbations controls the relative importance of rare or common species in determining the community's temporal variability ("environmental perturbations" sensu Arnoldi et al., ). Mismatches between any function and any diversity metric can be used to rank perturbations based on their biomass scaling (Fig. E).
Another useful piece of information that can be gained with our top-down approach is the biomass scaling of a perturbation (i.e. whether the direct effect of a perturbation is proportional to the biomass of each species). This feature of perturbations controls the relative importance of rare or common species in determining the community's temporal variability ("environmental perturbations" sensu Arnoldi et al., ). Mismatches between any function and any diversity metric can be used to rank perturbations based on their biomass scaling (Fig. E). Furthermore, the proportion of mismatches between diversity and total biomass is actually a very good proxy for the biomass scaling exponent itself (Fig. B). If the responses of each species to the perturbations are available, then biomass scaling (and response diversity) can be extracted from the data directly. However, it is very common for measures of diversity to be estimated from data without measuring species-level responses (there are 221 observations in the Zhou et al. dataset where OTU richness is the only measure of diversity). In these cases, biomass scaling cannot be measured directly from the data, but it can be estimated using our framework. Comparing multiple community-level observations—measuring responses of more functions allows for more pairwise comparisons and therefore more detailed insights—allows us to describe these features of perturbations without ever having to collect information directly at the population level, which could therefore be an efficient and cost-effective tool for research synthesis or the analysis of biomonitoring data.

Future directions

Our work has so far overlooked the temporal dynamics of responses to perturbations. As we only needed to consider the initial and perturbed states of ecosystems for our geometrical approach (perturbations as displacement vectors in Fig. A), we haven't made the distinction between press and pulse perturbations and we also haven't considered non-linear responses. However, to consider a community's trajectory during and after a perturbation, our framework could be applied in future studies to test if the response diversity and biomass scaling of perturbations change over time. Practically, this would involve comparing the responses of two (or more) aggregate properties to a perturbation over time and checking if there was a change in the proportion of mismatches (e.g. over replicates in an experimental treatment). For example, if the proportion of mismatches between a set of ecosystem functions was initially very low following a perturbation but then increased over time, this would be consistent with a scenario where most species initially responded negatively to that perturbation but then some species increased in abundance (e.g. due to competitive release). Changes in the proportion of mismatches between diversity and function over time would likewise imply changes in the biomass scaling of a perturbation. It seems likely that this new geometrical perspective could be combined with tools in the ecological stability literature to study dynamic ecological responses to perturbations. In our work, we did not explicitly consider biotic interactions, yet they nonetheless play a role. The state that an ecosystem reaches after a perturbation undoubtedly depends on species interactions, especially if the time scale considered is long enough to allow community dynamics to play out. The classic example is the trophic cascade. If a perturbation directly impacts the top of a food chain (e.g. species invasion), it will in time also affect its base, following the alternating sign pattern characteristic of a cascade. Put in the context of our work, biotic interactions play a role in what we call "features of perturbations" like biomass scaling and response diversity. An exciting future direction would therefore be to seek recognizable signatures of species interactions in the variations through time of those perturbation features.
For instance, we can hypothesise that strong mutualistic interactions would generate increasingly coherent responses as time grows (corresponding to a reduction of response diversity). Furthermore, our framework makes the simplifying assumption that species' per capita contributions to functions are fixed, but in reality how a species contributes to a function may depend on its interactions with other species (although the fact that our validation test was conclusive implies that this assumption is not a bad approximation). We propose to see our work as a first step of a more general program: using perturbations as "probes", where ecosystem functions are macroscopic "observables", to better understand the dynamics of natural ecosystems. Given the generality of our framework, our work touches many areas of contemporary ecology. For multifunctional ecologists, it helps to explain how different functions can respond in different ways to global change. For ecologists interested in multiple perturbations, our work can be used to understand variability in how community-level properties observe the interactions (antagonistic or synergistic) between perturbations. For biodiversity-ecosystem functioning research, the opposing responses of diversity and function to perturbations (which we explained) should be considered when understanding how perturbations influence biodiversity-ecosystem functioning relationships. Our work can be used in disturbance ecology to link studies across disparate systems and may even help to interpret trade-offs between biodiversity and crop yield under different farming practices. When studying complex systems such as ecosystems, it is important to have baseline expectations for their behaviour. We have found that the variability between community-level responses to perturbations does not just limit synthesis and prediction in ecology. Instead, this variability is predictable and can be leveraged to gain useful information about species' responses to perturbations and species' contributions to ecosystem functioning. Our work provides a solid platform from which the complexity of community-level responses to anthropogenic global change can be better understood.
Geometrical approach

To understand what can be learned from the variability of aggregate properties' responses to perturbations, we transpose the ecological problem to a more abstract, but simpler, geometrical setting (described more formally in Box ). First, we consider the effects of perturbations on populations as displacement vectors in the ecosystem's state-space, where axes report the biomass of all constituent species (Fig. A). This vector is the difference between initial and perturbed states. It encodes the response to the perturbation at the population level at a given time and can be applied to both press perturbations (where the community may be expected to stay at the perturbed state for some time) and pulse perturbations (where the community may be expected to return to the initial state from the perturbed state). We then see ecosystem functions as positive directions in this same state space (Fig. B).
Total biomass, for example, is the sum of all the species' biomass and its direction lies exactly between all the axes, giving equal weight to all species. Other functions may not be influenced by the biomass of all species equally. In the hypothetical example shown in Fig. B, general decomposition is slightly more sensitive to the biomass of fungi than to the biomass of bacteria, plastic decomposition is primarily carried out by bacteria, and chemical production is primarily carried out by fungi. In general, a positive direction is spanned by a vector of positive values representing the per-capita contribution of each species to the function of interest. Our approach therefore aligns with Grime's "biomass-ratio hypothesis" where species contributions to ecosystem functions increase with increasing biomass. The "broadest" function, total biomass, is made up entirely of ones. The "narrowest" functions are made up entirely of zeroes, except on the entry associated with the only contributing species. Next, we combine these two levels of abstraction to model how functions "observe" perturbations. We recenter the state space so that the axes now represent the response of each species, with the origin consequently being the initial state of the community (Fig. C). Projecting the displacement vector (a multi-dimensional vector describing species responses to a perturbation) onto the direction of an ecosystem function (a one-dimensional vector made up of species contributions to the function) gives the "observation" of that function (see blue and red lines coming from perturbed states A and B in Fig. C). For each function, drawing a line through the origin and perpendicular to the direction of the function delineates two zones: one where the projection is negative, and thus the function observes a negative response, and the other where the projection is positive, and thus the function observes a positive response. If the two directions associated with the two functions are not perfectly collinear, there will be zones of state-space where responses to perturbations will be qualitatively different when observed by one function or the other. These zones are the two symmetrical cones centred on the origin, formed by the delineation lines of the functions, perpendicular to their respective directions (red zones in Fig. D). The larger the angle between two functions, the larger the zones of mismatches. Consequently, if species' responses were random and unbiased, the probability of finding a qualitative mismatch between two functions is:

$$P(\mathrm{Mismatch}) = \frac{\theta}{\pi} \qquad (1)$$

where θ is the angle between the two functions measured in radians. This collinearity of functions allows us to quantify their similarity. The similarity between functions, defined in this way, is related to their respective broadness, which quantifies the evenness of species' per-capita functional contributions. Indeed, in a community of S species and functions f and g:

$$\cos\theta \approx \cos\theta_{\mathrm{div}} = \sqrt{{}^{2}D(f)/S} \times \sqrt{{}^{2}D(g)/S} \qquad (2)$$

where 1/S ≤ ²D(f)/S ≤ 1 is the broadness of the function f (same for function g), defined here as the Gini–Simpson diversity index of the vector of species contributions to the function, normalized by species richness S. Expression (2) quantifies the intuitive expectation that two broad functions ought to be highly collinear, whereas two narrow functions can be independent (i.e. orthogonal to one another) if they are not performed by the same set of species. There is a straightforward, yet very useful application of this reasoning that we will use in our data analysis: because total biomass is the broadest function by definition (corresponding to a value of 1), we can use the proportion of mismatches P_{f,bio} between total biomass and a given function f to estimate the latter's broadness. Indeed, if perturbations are random, we have, for any positive function:

$$\cos^{2}\!\left(P_{f,\mathrm{bio}} \times \pi\right) = {}^{2}D(f)/S \qquad (3)$$

We illustrate this relationship between broadness and mismatches with total biomass in Fig. E (inset). Our final level of abstraction is the realization that measures of diversity, which are highly non-linear functions of species biomass (in the mathematical sense of a function of variables, not in the sense of ecological functioning), can still be placed into this geometrical setting by considering their (state-dependent) gradients (outlined in more detail in Box ). The gradient of a diversity metric is a state-dependent vector encoding how small variations in each species' biomass change that diversity metric. The collinearity between diversity metrics and ecosystem functions can therefore be quantified by measuring the angle between the gradient of a diversity metric and the direction of an ecosystem function. Importantly, gradients of diversity metrics span non-positive directions in state space because increasing the biomass of some species (the more abundant ones) decreases diversity. This allows for the angle between diversity metrics and ecosystem functions to exceed 90°.
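The following minimal sketch (in Python, with randomly generated contribution vectors rather than empirical ones) illustrates Eqs. (1)–(3): the angle between two function directions sets the expected proportion of mismatches, broadness approximates that angle, and Eq. (3) can be inverted to estimate broadness from mismatches with total biomass.

```python
import numpy as np

def angle(phi, psi):
    """Angle between two positive function directions (radians)."""
    cos_t = phi @ psi / (np.linalg.norm(phi) * np.linalg.norm(psi))
    return np.arccos(np.clip(cos_t, -1.0, 1.0))

def broadness(phi):
    """Hill-Simpson diversity of the contribution vector, normalised by richness S."""
    p = phi / phi.sum()                      # relative per-capita contributions
    hill_simpson = 1.0 / np.sum(p ** 2)      # 2D of the contribution vector
    return hill_simpson / phi.size

rng = np.random.default_rng(0)
S = 50
f = rng.lognormal(sigma=0.3, size=S)         # a broad function
g = rng.lognormal(sigma=1.2, size=S)         # a narrower function

theta = angle(f, g)
print("expected mismatch proportion (Eq. 1):", theta / np.pi)
print("broadness-based approximation (Eq. 2):",
      np.arccos(np.sqrt(broadness(f) * broadness(g))) / np.pi)

# Inverting Eq. 3: estimate a function's broadness from its proportion of mismatches
# with total biomass (here the 'observed' proportion is replaced by the null expectation).
p_f_bio = angle(f, np.ones(S)) / np.pi
print("estimated broadness of f (Eq. 3):", np.cos(p_f_bio * np.pi) ** 2)
```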
Box 2: Formalizing the variability of observed responses to a perturbation

We formalize the process of observing the ecosystem-level impact of a given perturbation, based on aggregate features of functioning or diversity. Our goal is to explain what controls the probability that two scalar observations of the same perturbed ecosystem give opposite results. Here bold symbols denote S-dimensional vectors, where S is the species richness of the community. Let N^c be the initial (or control) state of a community: the vector of species biomass prior to the perturbation. Let N^p be the perturbed community state. The observed response, quantified via an ecosystem function f(N), is

$$\Delta f = f(\mathbf{N}^{p}) - f(\mathbf{N}^{c}). \qquad \text{(Box 2 Eq. 1)}$$

For a linear function, there exists a constant f_0 (because we will consider changes in functioning, and not absolute levels of functioning, this constant will play no role in what follows) and a vector φ—the gradient—such that

$$f(\mathbf{N}) = f_{0} + \langle\boldsymbol{\varphi},\mathbf{N}\rangle \qquad \text{(Box 2 Eq. 2)}$$

with ⟨·,·⟩ the scalar product of vectors. The elements of the gradient vector φ encode the per capita contribution of species to the function. For us it will not matter what those exact contributions are. Only relative species contributions, which determine the direction spanned by the vector φ, are required for our framework. A positive function is such that the elements of the gradient are positive. If we rewrite the response of the function to the perturbation, we get that

$$\Delta f = \langle\boldsymbol{\varphi},\Delta\mathbf{N}\rangle \qquad \text{(Box 2 Eq. 3)}$$

where ΔN = N^p − N^c is the vector of population-level responses. For non-linear aggregate properties, such as diversity metrics, the (state-dependent) gradient vector can be computed as $\varphi_{i}(\mathbf{N}^{c}) = \partial f/\partial N_{i}\,\big|_{\mathbf{N}^{c}}$. In this case, expression (Box 2 Eq. 3) will be an approximation, accurate for weak perturbations for which the state-dependent gradient vector is still relevant. Now, for two functions, f and g, associated with two directions spanned by the two gradient vectors φ and ϕ, we define their collinearity as the angle 0 ≤ θ < 2π whose cosine is

$$\cos\theta = \frac{\langle\boldsymbol{\varphi},\boldsymbol{\phi}\rangle}{\|\boldsymbol{\varphi}\|\,\|\boldsymbol{\phi}\|} \qquad \text{(Box 2 Eq. 4)}$$

where ‖·‖ denotes the Euclidean norm of vectors. A graphical argument (Fig. D) tells us that the fraction of perturbation vectors ΔN that will lead to a mismatch between the observations of f and g is

$$P\big(\operatorname{sign}(\Delta f) \neq \operatorname{sign}(\Delta g)\big) = \frac{\theta}{\pi} \qquad \text{(Box 2 Eq. 5)}$$

In such cases, one of the functions will observe a positive response, while the other function will observe a negative response. Generically, we can evaluate the cosine of the angle based on a notion of functional broadness. Indeed, given a random choice of positive functions,

$$\frac{\langle\boldsymbol{\varphi},\boldsymbol{\phi}\rangle}{\|\boldsymbol{\varphi}\|\,\|\boldsymbol{\phi}\|} \approx \frac{1}{S}\,\frac{\sum_{i}\varphi_{i}\,\sum_{i}\phi_{i}}{\sqrt{\sum_{i}\varphi_{i}^{2}}\,\sqrt{\sum_{i}\phi_{i}^{2}}} = \sqrt{\frac{{}^{2}D_{f}}{S}}\,\sqrt{\frac{{}^{2}D_{g}}{S}} \qquad \text{(Box 2 Eq. 6)}$$

where ^qD denotes Hill's diversity index. We will call the fraction ²D_f/S the broadness of the function f, which is maximal (and equal to one) if all species contribute equally to the function (i.e. total biomass). We can modify the above theory to account for an additional piece of population-level information in the form of a biomass scaling of population-level responses. It is indeed reasonable to expect that more abundant species will, in absolute terms, show a larger response to some types of perturbations (e.g. habitat loss of 50% may decrease biomass of all species by 50%, so the most abundant species will experience the greatest absolute losses). For some scaling exponent α ≥ 0, if we denote Λ the diagonal matrix whose elements are the species biomass prior to the perturbation, we may assume that the perturbation displacement vector takes the form ΔN = Λ^α Δ. We then have that

$$\Delta f = \langle\Lambda^{\alpha}\boldsymbol{\varphi},\Delta\rangle \qquad \text{(Box 2 Eq. 7)}$$

and the relevant angle to consider becomes

$$\cos\theta_{\alpha} = \frac{\langle\boldsymbol{\varphi},\Lambda^{2\alpha}\boldsymbol{\phi}\rangle}{\|\Lambda^{\alpha}\boldsymbol{\varphi}\|\,\|\Lambda^{\alpha}\boldsymbol{\phi}\|} \qquad \text{(Box 2 Eq. 8)}$$

with θ_α/π giving the fraction of rescaled vectors Δ that would lead to a qualitative mismatch.
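As a small illustration of Box 2 Eq. 8, the sketch below computes the biomass-scaled collinearity between two hypothetical functions for a few values of the scaling exponent α; the contribution vectors and biomasses are randomly generated, not taken from the dataset.

```python
import numpy as np

def cos_theta_alpha(phi, psi, biomass, alpha):
    """Cosine of the biomass-scaled angle between two functions (Box 2 Eq. 8)."""
    lam = biomass ** alpha                       # diagonal of Lambda**alpha
    num = phi @ (lam ** 2 * psi)                 # <phi, Lambda^(2 alpha) psi>
    den = np.linalg.norm(lam * phi) * np.linalg.norm(lam * psi)
    return num / den

rng = np.random.default_rng(2)
S = 50
biomass = rng.lognormal(sigma=1.0, size=S)       # initial biomasses N^c
total_biomass = np.ones(S)                       # broadest function
other_function = rng.lognormal(sigma=0.8, size=S)  # hypothetical second function

for alpha in (0.0, 0.5, 1.0):
    cos_t = cos_theta_alpha(total_biomass, other_function, biomass, alpha)
    print(f"alpha = {alpha}: expected mismatch proportion = {np.arccos(cos_t) / np.pi:.3f}")
```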
Simulation model for perturbation experiments

To test, explore and illustrate the geometrical ideas outlined above, we conducted numerical experiments where ecological communities were perturbed and their responses were observed using different aggregate properties. We did not ask our simulations to have complex, realistic underpinnings. We simply defined a protocol to generate a wide range of initial and perturbed states, and a wide range of aggregate properties (representing ecosystem functions or diversity measures) that we then used to quantify the ecosystem-level impacts of the perturbations. Initial states were vectors N of length S (chosen uniformly between S = 20 and S = 100) whose elements N_i are the initial species abundance or biomass. Those were drawn from log-normal distributions with zero mean and standard deviation uniformly chosen between 1/2 and 2, thus generating a wide range of communities while also mimicking empirical abundance distribution patterns. For each initial state, 500 perturbations were generated as vectors ΔN of length S (perturbed states are N + ΔN) whose elements ΔN_i were generated in the following way. First, for each species, we drew a value x_i from a normal distribution with unit standard deviation and mean μ. For a given initial state, μ is a fixed value uniformly chosen between −0.3 and 0.3. It determines the qualitative consistency of population-level responses (more on this below). We then normalized the set of values x_i by $\left(\frac{1}{S}\sum_{j} x_{j}^{2}\right)^{1/2}$, which gave us a set of values y_i that we used to define the actual response of species as

$$\Delta N_{i} = \mathrm{intensity} \times y_{i} \times N_{i}^{\alpha} \qquad (4)$$

For a given perturbation, its intensity was drawn uniformly between 0 and 0.1. We also allowed the impacts of perturbations to scale with the initial abundance (or biomass; in this toy model there is no difference) of species. For each perturbation, the biomass scaling exponent (α) was uniformly chosen between 0 and 1. When α = 1, the population response to the perturbation is, on average over the community, proportional to the species' initial biomass. The other basic population-level feature that we considered is a notion of response diversity (i.e. whether the perturbation impacted most species positively or negatively). As mentioned above, this feature is set by the parameter μ. Indeed, if we define the population-level response consistency as

$$\mathrm{bias} = \left|\,\frac{1}{2} - \frac{\#\{i,\ \Delta N_{i} < 0\}}{S}\,\right| \qquad (5)$$

(# denotes the number of elements in a discrete set, here the set of species whose abundances are reduced by the perturbation), then the expected fraction of negative population responses in the above expression is Φ(−μ), where Φ(x) is the cumulative function of a standard normal distribution. Ecosystem functions, which we used to "observe" the ecosystem-level response to perturbations, were represented by positive directions in an S-dimensional space, spanned by vectors φ whose elements φ_i represent species' per-capita functional contributions. For a given state N, its level of functioning is then f(N) = ∑ φ_i N_i (see Box ). The per-capita contributions φ_i were drawn from a log-normal distribution with a standard deviation uniformly chosen between 0 and 1.3. When the standard deviation was small, the functions were broad, as the per-capita contributions of each species were similar. When the standard deviation was large, however, the functions were more narrow, with a large variation in the per-capita contributions of each species to the function. Diversity metrics were taken from the family of Hill diversities, which define the effective number of species as:

$${}^{q}D = \left(\sum_{i=1}^{S} p_{i}^{q}\right)^{1/(1-q)} \qquad (6)$$

where S is richness, p_i is the relative abundance (or biomass) of species i and q is the Hill number that determines the sensitivity of the diversity index to rare or to abundant species. This general equation encompasses species richness (q = 0), the Shannon index (q = 1) and the Gini–Simpson index (q = 2).
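A minimal sketch of this simulation protocol is given below (Python; the parameter values and seed are illustrative, and only a single community and a single pair of functions are simulated, rather than the full 1000 × 500 design).

```python
import numpy as np

rng = np.random.default_rng(3)
S = 60
N = rng.lognormal(mean=0.0, sigma=1.0, size=S)          # initial state
mu, alpha, intensity = 0.1, 0.7, 0.05                    # bias, biomass scaling, intensity

def perturb(N, mu, alpha, intensity, rng):
    """One perturbation displacement vector following Eq. (4)."""
    x = rng.normal(loc=mu, scale=1.0, size=N.size)
    y = x / np.sqrt(np.mean(x ** 2))                     # normalisation used before Eq. (4)
    return intensity * y * N ** alpha

# Two positive linear functions used to 'observe' the response
phi = rng.lognormal(sigma=0.3, size=S)                    # broad function
psi = rng.lognormal(sigma=1.2, size=S)                    # narrow function

mismatches = 0
n_perturbations = 500
for _ in range(n_perturbations):
    dN = perturb(N, mu, alpha, intensity, rng)
    mismatches += np.sign(phi @ dN) != np.sign(psi @ dN)  # qualitative mismatch check

print("realised mismatch proportion:", mismatches / n_perturbations)
```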
To apply our geometrical framework to diversity observations, we considered the directions spanned by their gradients (the vector of partial derivatives $\partial\,{}^{q}D/\partial N_{i}$), evaluated at the initial state, which take the form ^qφ = (^qφ_i) with

$${}^{q}\varphi_{i} = \frac{q}{1-q}\left(p_{i}^{\,q-1} - \sum_{j=1}^{S} p_{j}^{\,q}\right) \qquad (7)$$

For each perturbation experiment and each pair of aggregate properties f, g—either two positive linear functions, or a diversity metric and a function (for two diversity metrics see Supplementary Note )—we checked the consistency of their responses. That is, we looked at the sign of f(N + ΔN) − f(N) and compared it to the sign of g(N + ΔN) − g(N). If they do not coincide, there is a qualitative mismatch between the two ways of observing the ecosystem's response to the perturbation. For the simulations, 1000 communities (i.e. initial states) were generated and each one experienced 500 different perturbations. For Figs. E and , two ecosystem functions of varying broadness were generated for each community and used to observe the community-level responses to the perturbations. The angle between the directions defined by the functions was calculated, divided by π (Eq. ), and plotted against the realised proportion of mismatches over the 500 perturbations, while recording the relative deviation from the prediction. For Fig. E all perturbations were unbiased at the population level, but for Fig. perturbations could vary in their population-level consistency. The angle between each pair of functions was also estimated using only the knowledge of their broadness based on their mismatches with biomass. For Fig. A, total biomass (positive direction whose elements are all 1) and Hill-Simpson (²D) were used to observe the ecosystem-level responses to the perturbations. The effective angle between total biomass and the state-dependent gradient of the diversity index, based on Box 2 Eq. 8, was calculated, divided by π, and plotted against the actual proportion of mismatches.
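The gradient direction of Eq. (7), and its use in checking function–diversity mismatches, could be sketched as follows (assuming q ≠ 1; the community and perturbation are randomly generated for illustration).

```python
import numpy as np

def hill_gradient_direction(N, q):
    """Direction spanned by the gradient of Hill diversity of order q (Eq. 7); q != 1."""
    p = N / N.sum()
    return q / (1 - q) * (p ** (q - 1) - np.sum(p ** q))

rng = np.random.default_rng(4)
S = 40
N = rng.lognormal(sigma=1.0, size=S)              # initial state
dN = 0.05 * rng.normal(size=S) * N                # a biomass-scaled perturbation (alpha = 1)

grad = hill_gradient_direction(N, q=2)            # Hill-Simpson (q = 2) gradient direction
total_biomass = np.ones(S)                        # broadest function

# The same perturbation 'observed' by diversity and by total biomass
print("diversity response sign:", np.sign(grad @ dN))
print("biomass response sign:  ", np.sign(total_biomass @ dN))
```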
Detailed analysis of empirical data

Equipped with our geometrical framework for understanding the variability of functional and biodiversity responses to perturbations, we can return to the empirical data from Box to uncover novel insights. However, before we can use our framework to learn more about species contributions to ecosystem functions and about the structure of perturbations, we can first confirm that viewing functions as directions and equating their mismatches to their collinearity is a valid approach for a given dataset. To do this we can perform a validation test, formally described in Supplementary Note , where we try to predict the mismatches between two functions (i.e. their collinearity) based on the mismatches between all other pairs of functions. Indeed, if we know the respective angles that two chosen directions make with the remaining set of directions, we should be able to estimate, in a specific way, the angle between the chosen pair. This test involves matrix operations that can introduce artefacts into the results, meaning that an inconclusive test does not necessarily invalidate the application of our framework to a given dataset. However, a conclusive test—mismatches between two functions being well predicted by mismatches between all other pairs of functions—is very strong support for the view of aggregate properties as directions in state space and gives a green light for further exploration of the data using our geometrical arguments.

To better understand species' contributions to ecosystem functions we can use the mismatch data (i.e. matrix in Fig. ) to examine both the similarity of functions and their relative broadness. Firstly, the matrix of mismatches can be used as an adjacency matrix for a network that groups functions based on their similarity. A force-directed layout algorithm, such as the Kamada–Kawai path length cost-function, will generate networks where distance corresponds to the similarity of functions. Secondly, we can use total biomass (the broadest ecosystem function by definition) as a baseline to quantify the broadness of other functions. The angle between total biomass and other broad functions will be small, so over many perturbations with unbiased population-level effects the proportion of mismatches will be low. Narrower functions will have larger angles with biomass, which will result in higher proportions of mismatches (Fig. ). Working in reverse, we can use the proportion of mismatches between some function and total biomass (directly available from the data) to predict the broadness of that function. For a fair estimate of broadness, the proportion of mismatches between the function and biomass should be quantified over a large pool of perturbations that collectively have random effects. Here, we therefore do not consider perturbations of nutrients for ecosystem functions related to that nutrient—these perturbations have systematic effects rather than random effects—and we only consider cases where the proportion of mismatches between a function and biomass is based on at least twenty perturbations including at least five types of global change factors.
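To illustrate the network construction just described, the sketch below builds a small graph from a hypothetical matrix of pairwise mismatch proportions and computes a Kamada–Kawai layout with networkx; the function labels and mismatch values are invented for illustration.

```python
import numpy as np
import networkx as nx

functions = ["biomass", "respiration", "decomposition", "nitrification"]
# Hypothetical pairwise mismatch proportions (symmetric, zero on the diagonal).
mismatch = np.array([
    [0.00, 0.15, 0.20, 0.45],
    [0.15, 0.00, 0.25, 0.50],
    [0.20, 0.25, 0.00, 0.40],
    [0.45, 0.50, 0.40, 0.00],
])

G = nx.Graph()
for i, fi in enumerate(functions):
    for j, fj in enumerate(functions):
        if i < j:
            G.add_edge(fi, fj, mismatch=float(mismatch[i, j]))

# Edge weights are treated as target distances (via shortest paths), so functions
# with low mismatch proportions end up close together in the layout.
pos = nx.kamada_kawai_layout(G, weight="mismatch")
for name, (x, y) in pos.items():
    print(f"{name:>14s}: ({x:+.2f}, {y:+.2f})")
```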
So far we have used perturbations to gain insights into species contributions to ecosystem functions. However, we can also use the mismatches between functions to gain useful information about the population-level effects of the perturbations themselves. We can compare the proportion of mismatches between two ecosystem functions (e.g. total biomass and respiration) across different perturbations to quantify the relative response diversity of those perturbations. If perturbations have low response diversity (i.e. most species respond in the same direction), then perturbations will be biased in their directions in state-space towards the fully negative or fully positive areas of state-space (bottom left quadrant or top right quadrant of Fig. D, respectively), and would avoid the cones of mismatches for functions with positive directions. We can therefore use the proportion of mismatches for a given pair of functions to rank perturbations based on their response diversity. We can also use mismatch data to ask if a perturbation's population-level effects are independent of biomass or if more abundant species have larger absolute changes in biomass (i.e. biomass scaling of a perturbation). If a perturbation causes the biomass of abundant species to decrease, total biomass will decrease but diversity will increase. If, on the other hand, a perturbation causes the biomass of abundant species to increase, total biomass will increase but diversity will decrease. As such, when perturbations are scaled by biomass, there will be a higher proportion of mismatches between functions and diversity. As a result, we can use the proportion of mismatches between a function and a diversity metric to rank perturbations based on their biomass scaling. Here, we only made estimates for the response diversity or biomass scaling of perturbations if there were at least five shared observations of those perturbations for the relevant pair of aggregate properties in the dataset. A detailed tutorial, aimed at empirical ecologists interested in applying this geometrical framework to their data, is available at https://jamesaorr.github.io/community-properties-tutorial/ . The tutorial contains useful snippets of code and detailed descriptions of all stages of the analysis, from (i) data preparation and (ii) the validation test to (iii) exploring species contributions to functions and (iv) exploring the population-level effects of perturbations.

Reporting summary

Further information on research design is available in the Reporting Summary linked to this article.
A tale for two roles: Root-secreted methyl ferulate inhibits
b911ba37-9c72-4fc4-b484-173619de6c1e
11786473
Microbiology[mh]
Plant roots exude substantial amounts of primary and secondary metabolites that play vital roles in plant-plant or plant-microbial interactions . The profiles of the defensive root exudates demonstrate diversity and dynamic composition during the stress response . In response to adverse conditions, plant roots release a variety of specialized metabolites into the rhizosphere, including compounds that directly defend against soil-borne pathogens and those that activate interactions with rhizosphere microbes . For example, Arabidopsis roots present significantly greater exudation of amino acids and long-chain organic acids when challenged with the foliar pathogen Pseudomonas syringae pv. tomato DC3000 . t-Cinnamic acid has been identified as a de novo synthesized and secreted compound in barley in response to the soil-borne pathogen Fusarium . The colonization of cucumber roots by Fusarium oxysporum increases the secretion of tryptophan, a compound that serves as a potent chemoattractant for the PGPR strain B. velezensis SQR9 . P. nicotianae is a species of oomycete that can infect hundreds of plants and cause many destructive soil-borne diseases, including tobacco black shank disease and soybean root and stem rot . Metabolites with inhibitory or prebiotic functions against P. nicotianae have been identified in the root exudates of resistant cultivars, including tartaric acid, ferulic acid, and lauric acid . The application of quasi-targeted metabolomics has revealed the potential of the root exudate metabolites isoliquiritigenin and lauric acid in managing P. nicotianae . For soil-borne diseases, implementing enhanced biological control methods through root exudates is an efficacious and environmentally benign approach. Methyl ferulate, a type of phenolic acid, is a novel, low-toxicity and low-cost antimicrobial agent that has been demonstrated to exhibit remarkable antibacterial activity and inhibit protein oxidation, controlling postharvest pathogens in fresh fruits and vegetables (Kong et al., 2024; Qiu et al., 2017). Methyl ferulate has been incorporated into fiber membranes for fish packaging, prolonging its antibacterial activity and preserving sea bass (Li et al., 2021). Methyl ferulate markedly inhibits the growth of Shewanella putrefaciens by disrupting the cell membrane structure and interfering with DNA replication and repair, achieving a sustained effect during fish packaging (Li et al., 2021). The methyl ferulate compounds of noni leaves have been demonstrated to contribute significantly to antibacterial activity against Escherichia coli, Proteus vulgaris, and Staphylococcus aureus (Zhang et al., 2016). Our previous study demonstrated that the root exudates of Gexin 3 (resistant cultivar) have a pronounced inhibitory effect on mycelial growth and spore germination . This finding suggested that antifungal metabolites must be present in the root exudates. The metabolome analysis conducted in this study showed that more methyl ferulate was secreted in the root exudates of the Gexin 3 cultivar than of Xiaohuangjin 1025 (susceptible cultivar) following inoculation with P. nicotianae . Although methyl ferulate has been employed in postharvest pathogen control and food preservation, research on its secretion and function against plant pathogens is scarce. Thus, we sought to evaluate the effects of methyl ferulate on P. nicotianae and reveal its mechanism in the crosstalk between plants and soil microbes (Fig. ).
Comparative metabolomic analysis of root exudates and their antifungal activity against P. nicotianae To investigate whether antifungal metabolites were present in the root exudates, we conducted a comparative analysis of the root exudates of Gexin 3 (resistant, R) and Xiaohuangjin 1025 (susceptible, S) following inoculation with P. nicotianae . A total of 304 metabolites were identified in the root exudate samples via UPLC-MS/MS (Fig. A and Supplemental Data set 1). Heatmap analysis revealed a notable distinction between Gexin 3 and Xiaohuangjin 1025, particularly following inoculation with P. nicotianae (Fig. A). A total of 40 root exudate metabolites were differentially abundant in Gexin 3, whereas 81 were differentially abundant in Xiaohuangjin 1025 (Fig. B). A total of 43 secreted root exudate metabolites were altered in the resistant cultivar following inoculation with P. nicotianae (triangles marked in Fig. B and listed in Supplemental Table 1). Further analysis demonstrated that most of these components were lipids, phenolic acids, organic acids, and amino acids (Fig. C). Seven compounds (methyl ferulate, ferulate, adipic acid, punicic acid, malic acid, chlorogenic acid, and kinic acid) were selected for further analysis on the basis of the criteria of being harmless, synthesizable and inexpensive. The compounds were introduced into the medium at a concentration of 100 μg/mL, and metalaxyl-mancozeb (100 μg/mL) served as the positive control. The results indicated that methyl ferulate significantly inhibited the mycelial growth of P. nicotianae (Fig. D, Supplemental Table 2). Therefore, further investigations focused on methyl ferulate. Methyl ferulate exerts an obvious controlling effect on P. nicotianae To gain further insight into the effects of methyl ferulate on P. nicotianae , its anti-P. nicotianae activity was evaluated via the gradient addition of methyl ferulate to the medium. The results demonstrated that methyl ferulate had a dose-dependent inhibitory effect (Fig. A). The median effect concentration (EC50) of methyl ferulate against P. nicotianae was calculated as 67.51 µg/mL (y = 2.9065x − 0.317, R² = 0.9942). This value informed further research, with the working concentration set at 70 µg/mL. To determine the extent of the effect of methyl ferulate on P. nicotianae , we evaluated pathogen abundance three days after the inoculation of P. nicotianae into sterile soil and natural soil. Methyl ferulate diminished pathogen abundance, and this effect was more pronounced in natural soil (Fig. B and C). The effects of methyl ferulate on tobacco disease resistance were subsequently evaluated, and the disease index of tobacco and the pathogen density in the soil were determined. The findings indicated that methyl ferulate markedly increased tobacco resistance to black shank disease (Fig. D), and both the disease index (methyl ferulate treatment group < 10%, control group > 30%, P < 0.05) and pathogen abundance notably decreased (Fig. E and F). These findings demonstrated that methyl ferulate has a significant inhibitory effect on the growth of P. nicotianae in both medium and soil and enhances tobacco resistance to black shank disease. Furthermore, methyl ferulate had antagonistic effects on B. cinerea, P. aphanidermatum, P. sojae, C. lagenarium and F. oxysporum (Figure ), indicating that it may have great potential for application in the control of plant fungal diseases.
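The EC50 calculation described above can be reproduced in a few lines. The sketch below (Python; the colony diameters are hypothetical, and it assumes, consistent with the reported coefficients, that the fitted line relates probit-transformed inhibition (y) to log10 concentration (x)) walks from plate measurements to inhibition rates to the fitted regression and the EC50.

import numpy as np
from scipy.stats import norm

# Hypothetical colony diameters (cm) on OA plates amended with methyl ferulate;
# 0.6 cm is the diameter of the inoculum disk and 6.0 cm is the untreated control.
conc = np.array([20, 40, 60, 80, 100], dtype=float)   # concentrations in ug/mL
treated = np.array([5.2, 4.3, 3.4, 2.6, 1.9])         # measured colony diameters (cm)
control_diameter = 6.0
plug = 0.6

growth_treated = treated - plug
growth_control = control_diameter - plug
inhibition = (growth_control - growth_treated) / growth_control   # fraction, 0-1

# Probit-style regression: probit(inhibition) + 5 against log10(concentration).
x = np.log10(conc)
y = norm.ppf(inhibition) + 5
slope, intercept = np.polyfit(x, y, 1)

# EC50 corresponds to 50% inhibition, i.e. y = 5.
ec50 = 10 ** ((5 - intercept) / slope)
print(f"fitted line: y = {slope:.4f}x {intercept:+.4f}")
print(f"estimated EC50: {ec50:.2f} ug/mL")

Plugging in the coefficients reported above (slope 2.9065, intercept −0.317) gives 10^((5 + 0.317)/2.9065) ≈ 67.5 µg/mL, matching the stated EC50.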
Methyl ferulate impairs NADH dehydrogenase function and promotes autophagy To gain further insight into the inhibitory mechanism of methyl ferulate on P. nicotianae , a transcriptome analysis was conducted. The results revealed 240 differentially expressed genes (DEGs) in P. nicotianae following methyl ferulate treatment. Among these genes, 162 were up-regulated and 78 were down-regulated (Fig. A). Further analysis of the KEGG pathways revealed that these genes were significantly enriched in the oxidative phosphorylation, starch metabolism, and sucrose metabolism pathways (Fig. B). All members of the NADH dehydrogenase iron-sulfur protein 7 (Ndufs7) family, which encodes a subunit of mitochondrial complex I associated with oxidative phosphorylation, presented increased expression in the methyl ferulate treatment profile (Fig. C). The expression of these genes was determined via qPCR, confirming the increasing trends observed in the previous analysis (Fig. D-H). These findings prompted us to focus on NADH dehydrogenase activity. The results demonstrated a significant decrease in NADH dehydrogenase activity following methyl ferulate treatment (Fig. I). The NAD+ content and the NAD+/NADH ratio also significantly decreased in response to methyl ferulate treatment (Fig. J). These findings indicate dysfunction of NADH dehydrogenase and mitochondrial complex I. Complex I is essential for ATP synthesis, and we detected a reduction in the ATP content of P. nicotianae cells following methyl ferulate treatment (Fig. K). In addition, the assay conducted with the fluorescent probe JC-1 revealed a transformation of JC-1 from red (JC-1 aggregates) to green (JC-1 monomers) (Fig. L), indicating that the mitochondrial membrane potential was reduced following methyl ferulate treatment. Furthermore, methyl ferulate induced autophagy in P. nicotianae cells: the autophagy staining assay with MDC revealed a notable increase in green fluorescence following methyl ferulate treatment, indicating a distinctive alteration in autophagy (Fig. M). Transmission electron microscopy (TEM) was used to observe the structure and distribution of autophagosomes, and the methyl ferulate-treated samples presented more distinct double-membrane autophagosomal structures (Fig. N). These biochemical and ultrastructural results demonstrated that the autophagy pathway was significantly activated. These findings suggest that methyl ferulate impedes the growth of P. nicotianae by disrupting the function of NADH dehydrogenase, reducing ATP synthesis, decreasing the mitochondrial membrane potential, and stimulating autophagy. Methyl ferulate recruits beneficial Bacillus species to inhibit P. nicotianae infection The application of methyl ferulate resulted in a notable reduction in pathogen abundance in both sterile and natural soil samples. However, the efficacy of methyl ferulate was more pronounced in natural soils (< 10⁸ copies per gram of soil) than in sterile soils (> 10⁸ copies per gram of soil), as shown in Fig. B and C. This prompted us to investigate the potential involvement of microorganisms in this process, so we applied methyl ferulate treatment after inoculating P. nicotianae in a nursery field of tobacco black shank disease. The bacterial community composition in the rhizosphere soil was monitored via Illumina MiSeq sequencing of the 16S rRNA gene in the CK group and the methyl ferulate treatment group.
PCoA at the OTU level revealed a clear separation between the methyl ferulate-treated and control samples (Fig. A; ANOSIM, R = 0.78, P = 0.001). The rhizosphere bacterial microbiome in the methyl ferulate-treated group formed a more connected co-occurrence network, with shorter path lengths, more edges, and more nodes (Fig. B and Supplemental Table 3). This result indicates that the complexity of the rhizosphere bacterial community was greater in the methyl ferulate-treated group. To illustrate the changes in community composition and identify the affected taxa at the genus level, a Wilcoxon rank-sum test was conducted between the methyl ferulate treatment group and the control group after inoculation with P. nicotianae in the nursery field of tobacco black shank disease. Among the top 15 most altered genera, Bacillus presented the highest abundance (relative abundance > 10%, Student's t -test p -value < 0.01) in the methyl ferulate treatment group (Fig. C). These findings indicate that methyl ferulate increased the abundance of Bacillus . Given the high abundance of Bacillus in the methyl ferulate-treated rhizosphere communities, we analyzed the relative abundance and evolutionary relationships of each individual OTU within this genus. Sixteen OTUs were identified as belonging to the Bacillus genus and were among the 300 most abundant OTUs. These OTUs clustered phylogenetically; OTU5403 was the most abundant and enriched taxon in the rhizosphere soil, and its abundance increased significantly in the methyl ferulate treatment group (Fig. D). To further validate the potential inhibitory effect of Bacillus on black shank disease, bacteria were isolated from rhizosphere soil samples subjected to methyl ferulate treatment. Thirteen Bacillus strains were isolated, and their antagonistic activity against P. nicotianae was subsequently assessed (Supplemental Table 4). The results demonstrated that most of these Bacillus strains inhibited P. nicotianae growth, with growth inhibition ranging from 33.77% to 88.32% (Supplemental Table 4). The 16S rRNA gene sequences of these strains were subsequently compared with that of OTU5403; the Priestia megaterium NM17 strain exhibited 99.53% similarity (Figure ). Before inoculation with P. nicotianae , the NM17 strain was introduced into the soil, and its disease-suppressive effect was evaluated. The results indicated significant disease suppression and a concomitant decrease in the disease index (Fig. E and F). In conclusion, these results demonstrate that methyl ferulate can also facilitate the recruitment of Bacillus species to inhibit P. nicotianae infection. Overexpression of NtCOMT10 increased methyl ferulate secretion and reduced P. nicotianae abundance To gain a deeper understanding of the function of methyl ferulate, we attempted to regulate its synthesis in tobacco. A literature review revealed that caffeic acid O-methyltransferases may be involved in methyl ferulate synthesis. Twelve NtCOMTs were identified in tobacco using BLASTP. The phylogenetic tree revealed that these members were classified into two groups, with Group 1 further subdivided into two subfamilies (Figure A). Analysis of gene motifs and structures corroborated the phylogenetic grouping (Figure A). The expression levels of these genes were then examined in previously generated transcriptome profiles of the resistant cultivar Beihart1000-1 (R) and the susceptible cultivar Xiaohuangjin 1025 (S) at 0 and 60 h post infection.
Heatmap analysis revealed that only Ntab0849090.1 and Ntab0513830.1, the two members of Group 1b, exhibited increased expression levels following inoculation with P. nicotianae (Figure B). Ntab0849090.1 exhibited the more pronounced change and was renamed NtCOMT10 in accordance with its chromosomal location. To determine whether NtCOMT10 plays a role in methyl ferulate synthesis, we overexpressed NtCOMT10 in tobacco K326. Two independent high-overexpression lines with sufficient seeds, OE7 and OE18 , were selected for further analysis (Figure C). The root exudates were subsequently collected, and the methyl ferulate content was measured via UHPLC-MS. Based on peak areas, methyl ferulate production in OE7 and OE18 was 1.68-fold and 2.21-fold that of the wild-type, respectively (15,941 and 20,891 mV·s versus 9,449 mV·s) (Fig. A). The methyl ferulate content detected in the root exudates was consistent with the fold-expression of NtCOMT10 . These results demonstrated that NtCOMT10 plays a vital role in methyl ferulate synthesis. Compared with the wild-type, the overexpression lines presented higher survival rates and lower disease indices following inoculation with P. nicotianae (Fig. B-C). Moreover, the relative abundance of P. nicotianae was determined in NtCOMT10 OE-7, OE-18, and wild-type plants and was markedly reduced in the rhizosphere soils of the overexpression lines (Fig. D). The bacterial community composition in the rhizosphere soil of the wild-type and NtCOMT10 overexpression plants was subsequently monitored. Principal coordinate analysis (PCoA) at the operational taxonomic unit (OTU) level revealed significant differences in the composition of the tobacco rhizosphere bacterial community among the methyl ferulate-treated, wild-type, OE-7, and OE-18 groups under both normal conditions and pathogen inoculation (Fig. E and F; ANOSIM, R = 0.59, P = 0.001 under normal conditions; R = 0.72, P = 0.001 under pathogen inoculation). However, no significant difference in Shannon diversity was detected (Figure ). In the absence of external stimuli from P. nicotianae , the composition of the rhizosphere bacterial community in the methyl ferulate treatment group exhibited a distinctive profile along the horizontal axis compared with those of the other groups (Fig. E). Furthermore, following pathogen inoculation, pronounced differences were evident between the WT group and the remaining groups (Fig. F and Figure ). These findings suggest that methyl ferulate strongly influences the composition of the bacterial community in the tobacco rhizosphere. At the genus level, a significant increase in the relative abundance of Bacillus was observed in the methyl ferulate treatment and overexpression line groups compared with the wild-type control group (Fig. G), which is consistent with the results observed in the field (Fig. C). Furthermore, we identified OTU9124, which exhibited 100% similarity to OTU5403 and 99.53% similarity to the NM17 strain (Figure ), as the most similar OTU in the pot experiment profiles and quantified its relative abundance. A notable increase in the relative abundance of OTU9124 was observed in the overexpression and methyl ferulate treatment groups (Fig. H). In conclusion, overexpression of NtCOMT10 increased the secretion of methyl ferulate and reduced P. nicotianae abundance.
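The PCoA separations reported above follow a standard recipe that can be sketched briefly. The example below (Python; the OTU count table and sample grouping are hypothetical placeholders rather than the study's sequencing data) computes Bray-Curtis distances between samples and a classical PCoA ordination of the kind shown in Fig. E and F.

import numpy as np
from scipy.spatial.distance import pdist, squareform

rng = np.random.default_rng(1)

# Hypothetical OTU count table: 12 samples (rows) by 50 OTUs (columns).
counts = rng.poisson(5, size=(12, 50)).astype(float)
relative = counts / counts.sum(axis=1, keepdims=True)   # relative abundances per sample

# Bray-Curtis distance matrix between samples.
d = squareform(pdist(relative, metric="braycurtis"))

# Classical PCoA: double-centre the squared distances and eigendecompose.
n = d.shape[0]
centering = np.eye(n) - np.ones((n, n)) / n
b = -0.5 * centering @ (d ** 2) @ centering
eigvals, eigvecs = np.linalg.eigh(b)
order = np.argsort(eigvals)[::-1]
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

coords = eigvecs[:, :2] * np.sqrt(np.maximum(eigvals[:2], 0))   # PCo1 and PCo2 for each sample
explained = eigvals[:2] / eigvals[eigvals > 0].sum()
print("variance explained by PCo1 and PCo2:", np.round(explained, 3))

An ANOSIM test on the same distance matrix (for example, with scikit-bio's anosim function and the treatment labels as the grouping) would then provide R and P values analogous to those quoted above.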
This study revealed significant differences in root exudate metabolites between resistant and susceptible tobacco cultivars. The black shank-resistant cultivar Gexin 3 secreted greater quantities of methyl ferulate in its root exudates than did the susceptible cultivar Xiaohuangjin 1025 following inoculation with P. nicotianae (Fig. , Fig. and Supplemental Data set 1).
The findings of this study indicate that root exudate methyl ferulate can increase resistance to tobacco black shank disease in two ways: one is by directly inhibiting the growth of P. nicotianae through disrupting the function of NADH dehydrogenase, reducing ATP synthesis, and promoting the progression of autophagy (Figs. and Fig. ); the other is the indirect effect of recruiting beneficial microbes to inhibit the pathogen (Figs. and Fig. ). Both the application of methyl ferulate directly into the soil and the overexpression of the related gene NtCOMT10 resulted in a notable increase in disease suppression in tobacco plants and a considerable reduction in the abundance of soil pathogens (Figs. and Fig. ). Therefore, we propose that plants enhance disease resistance by increasing the secretion of several root exudate metabolites, which inhibit pathogens and recruit beneficial microbes (Fig. ). In conclusion, this study offers a novel perspective on managing soil-borne fungal diseases by regulating or applying root exudate methyl ferulate to impede pathogens and tap microbiome-encoded functions. Root exudates are the initial line of defense against phytopathogenic infections in the root system. Methyl ferulate is a member of a class of phenolic acids that have been demonstrated to play roles in plant disease resistance. In the study of Li et al . , the total phenolic acid content was found to be significantly higher in the mid-resistant peanut cultivar Quanhua-7 than in the susceptible cultivar Ganhua-5 . Our results demonstrated that root exudate methyl ferulate plays a role in plant fungal disease resistance. Methyl ferulate markedly inhibited the mycelial growth of P. nicotianae by increasing the expression of Ndufs7 , impeding the function of NADH dehydrogenase and promoting the progression of autophagy (Fig. ). Previous research on rotenone, a classical inhibitor of mitochondrial complex I, has shown that it increases mitochondrial superoxide and the protein levels of the Ndufs7 subunit in a human cell line , which corroborates our conclusions. We also revealed that methyl ferulate has antagonistic effects on B. cinerea , P. aphanidermatum , and P. sojae (Figure ). These findings expand the potential applications of methyl ferulate in the control of plant fungal diseases and highlight its extensive application prospects. The root exudate methyl ferulate has the potential to act as a "prebiotic," which could help control soil-borne diseases. The external application of methyl ferulate to soils and the overexpression of NtCOMT10 , which regulates methyl ferulate biosynthesis in tobacco, resulted in more complex rhizosphere bacterial communities and a greater abundance of beneficial Bacillus species (Figs. and Fig. ). Furthermore, the Priestia megaterium NM17 strain was significantly enriched in the rhizospheres of the treatment samples and overexpression lines (Figs. D and H). Inoculation with the NM17 strain enhanced black shank disease resistance, as evidenced by the data presented in Figs. E and F. This study provides insight into the mechanisms by which plant root exudates help roots evade infection, both by suppressing pathogens and by recruiting beneficial microorganisms.
However, the chemical communication between microbes and plant roots is bidirectional: plant-secreted chemicals can precisely manage beneficial microbes, and beneficial microbes can establish colonization through chemical communication with the root . For example, gamma-aminobutyric acid has been demonstrated to function as an effective chemoattractant, inducing root colonization by Pseudomonas putida KT2440. This process ultimately protects plant hosts against phytopathogens, as it promotes systemic resistance . To understand the methyl ferulate network comprehensively, future studies should investigate the mechanism by which beneficial microbes are recruited by methyl ferulate and the associated communication from microbes to roots. The COMT gene family is conserved in plants and has been demonstrated to be involved in lignin content, melatonin production, and drought tolerance . The upregulation of methyl ferulate by the related gene NtCOMT10 markedly enhanced resistance to plant disease (Fig. A-D). Considering that methyl ferulate exhibits antagonistic properties against many plant pathogens (Fig. A and Figure ), we postulate that increasing COMT gene expression may increase disease resistance in certain species. In species with immature genetic transformation systems, the exogenous application of methyl ferulate may have a preventive effect. Given the established synthesis technology for methyl ferulate and the dosage (70 μg/mL) used to control tobacco black shank disease, the cost of methyl ferulate application is relatively low (approximately 50 dollars per hectare), making it a potentially viable option for sustainable agriculture. Our findings indicated that methyl ferulate, a component of root exudates, can directly inhibit pathogens and recruit partners to inhibit P. nicotianae infection in tobacco. These findings suggest a novel approach for controlling soil-borne fungal diseases through the application of root exudate methyl ferulate. Furthermore, regulation of the COMT gene family could facilitate holobiont-level breeding, enabling the matching of plant genetics with associated microbes. Plant materials and oomycete strain Tobacco cultivars (Gexin 3, Xiaohuangjin 1025 and K326) were obtained from the National Medium-term GenBank of Tobacco and planted in a field in Jimo, Shandong Province, China. In the greenhouse experiment, Gexin 3, Xiaohuangjin 1025, the NtCOMT10 overexpression lines, and the wild-type (K326) were cultured in 10 cm × 10 cm pots until the five-true-leaf stage (26 ℃, 70% relative humidity, 300 μmol m⁻² s⁻¹, 12 h light/12 h dark). In the field experiments, plants were planted and cultivated using a routine method until the mature stage (described previously ) in a nursery field of tobacco black shank disease located at the Jimo Tobacco Resources and Experimental Station of the Chinese Academy of Agricultural Sciences. The plants were subsequently inoculated with millet containing P. nicotianae . The soil for the pot experiments was collected in early June from a healthy tobacco field at the Jimo Tobacco Resources and Environmental Station of the Chinese Academy of Agricultural Sciences (36° 27′ N, 120° 35′ E), Qingdao, China. The soil was collected from a depth of 5–20 cm. All the experimental soils were sieved (< 4 mm) and homogenized thoroughly before use. The soils were stored in a cool, ventilated, dry place for 7 d at 25 °C before being used in the experiments. P.
nicotianae race 0 strain JM01 was obtained from the Integrated Pest Management Key Laboratory of China Tobacco, Qingdao, China. P. nicotianae JM01 was cultured on oatmeal agar (OA) medium at 26 °C. Plant inoculation, and metabolite extraction and analysis Gexin 3 and Xiaohuangjin 1025 were used in the root exudate metabolome analysis, each with three biological replicates (each replicate comprising 10 seedlings). The seedlings were inoculated with or without P. nicotianae at the five-true-leaf stage. P. nicotianae for inoculation was prepared via culture on a millet medium. The millet seeds were boiled until half of the seed coats ruptured, and the medium was then filtered and sterilized for 20 min at 121 °C. After cooling, a piece of P. nicotianae culture was transferred to the millet medium and cultured at 26 °C for 14 d. Each seedling was inoculated with 2 g of millet containing P. nicotianae per 200 g of soil in each pot. After 5 d, the root exudates of these seedlings were collected. The seedlings were dug up, and the attached soil was washed away. The roots were immersed in 100 mL of distilled water and cultured under normal conditions for 12 h. The root exudate samples were collected, frozen in liquid nitrogen, and stored at -80 °C. Sample preparation, extract analysis, metabolite identification, and quantification were performed at Wuhan MetWare Biotechnology Co., Ltd. ( www.metware.cn ) following standard procedures . Approximately 9 mL of freeze-dried root exudate powder was placed in 0.3 mL of 70% aqueous methanol at 4 °C overnight. The extract was centrifuged at 10,000 × g for 10 min, and the supernatant was filtered through a 0.22-μm microporous membrane for further LC‒MS/MS analysis. Metabolites with VIP ≥ 1 and fold change ≥ 2 or ≤ 0.5 were considered differential metabolites. Assays of antimicrobial activity Seven compounds selected from the differential root exudate metabolites were added to sterilized OA medium at a final concentration of 100 μg/mL for analysis of their inhibitory activities against P. nicotianae . The compounds were methyl ferulate (CAS 2309–07-1; powder; MACKLIN, China), ferulate (CAS 1135–24-6; powder; MACKLIN, China), adipic acid (CAS 124–04-9; powder; MACKLIN, China), punicic acid (CAS 544–72-9; liquid; MACKLIN, China), malic acid (CAS 6915–15-7; powder; MACKLIN, China), chlorogenic acid (CAS 327–97-9; powder; MACKLIN, China), and kinic acid (CAS 77–95-2; powder; MACKLIN, China). A 6-mm-diameter P. nicotianae disk was inoculated onto the OA medium and cultured at 28 °C for three days. The diameters of the P. nicotianae mycelium ( n = 3) were measured, and the inhibition rates of these compounds were calculated. The inhibitory activity of methyl ferulate against eight plant pathogenic fungi ( P. nicotianae , B. cinerea , R. zeae , A. alternata , C. lagenarium , F. oxysporum, P. sojae ) was assayed as described previously. In brief, a 0.6-cm-diameter mycelium disk of each phytopathogen was inoculated in the center of OA medium containing methyl ferulate at a final concentration of 70 μg/mL. The inhibition rate of methyl ferulate against the phytopathogens was calculated by measuring the diameter of the mycelium ( n = 3) after culture at 28 °C for three days. The antifungal activities of the isolated Bacillus strains were determined as described previously . Briefly, the isolated Bacillus strains were cultured in liquid nutrient broth (NB) medium for 12 h at 28 °C. The disk of P.
nicotianae was placed in the center of the OA medium, and 5 μL of each Bacillus strain culture was inoculated on both sides of the disk ( n = 3 for each strain). After culture at 28 °C for 3 days, the diameter of each colony was measured, and the growth inhibition rate of the mycelia was calculated as follows: colony diameter (cm) = measured colony diameter − inoculum disk diameter (0.6 cm); mycelial inhibition (%) = (normal colony diameter − treated colony diameter) / normal colony diameter × 100. EC50 value of methyl ferulate against P. nicotianae The EC50 value of methyl ferulate against P. nicotianae was calculated from mycelial growth inhibition, which was determined on the basis of a linear regression between the concentration and percentage inhibition . The inhibitory activity of methyl ferulate against P. nicotianae was determined via a previously described method . Methyl ferulate was dissolved in DMSO and added to sterilized OA medium at 0, 20, 40, 60, 80, or 100 μg/mL. A 0.6-cm P. nicotianae disk was placed in the center of the OA medium and cultured for 3 days at 28 °C, and the colony diameter and mycelial inhibition were calculated as follows: colony diameter (cm) = measured colony diameter − inoculum disk diameter (0.6 cm); mycelial inhibition (%) = (normal colony diameter − treated colony diameter) / normal colony diameter × 100. Each experiment included three replicates. P. nicotianae pathogenicity test The symptoms of tobacco black shank disease in the methyl ferulate-treated plants or overexpression plants were compared with those of the wild-type, as described previously . Briefly, seeds of Xiaohuangjin 1025 tobacco were germinated, and the seedlings were transplanted into pots and grown until the five-leaf stage. Each seedling was inoculated with 2 g of millet containing P. nicotianae per 200 g of soil in each pot. The seedlings were treated with 20 mL of 70 μg/mL methyl ferulate per pot or with DMSO (each treatment was repeated 3 times, with each replicate containing 12 tobacco plants), and the disease index was monitored daily for at least 10 d. Plant disease rate = (number of infected plants / total number of plants surveyed) × 100%; plant disease index = [∑ (number of diseased plants at each level × corresponding level value) / (total number of plants surveyed × highest level value)] × 100. Determination of P. nicotianae abundance by qPCR Soil or tobacco rhizosphere soil was collected 5 days after inoculation with P. nicotianae , and total DNA was extracted and purified from the soil samples via a soil genomic DNA extraction kit (TIANGEN, China). A qPCR assay with P. nicotianae -specific primers (Supplementary Table 5) was performed, and the density of P. nicotianae was quantified. Standard curves were generated using tenfold serial dilutions of a plasmid containing the P. nicotianae 18S rRNA gene. Each assay was performed in triplicate. Transcriptomic analysis via RNA-seq and validation of DEGs using qPCR RNA samples were extracted using a total RNA isolation kit (Vazyme, RC411-01) and sequenced using the NovaSeq X Plus platform (Majorbio Bio-Pharm Biotechnology Co. Ltd., Shanghai, China). The data were analyzed using the online Majorbio Cloud Platform (www.majorbio.com) . The raw sequencing reads were filtered using FASTP and mapped to the Phytophthora nicotianae genome (GCA_001483015.1) using HISAT2 (version 2.0.5) with the default parameters. DEGs (defined as fold change ≥ 2 or ≤ 0.5 with P < 0.05) were analyzed using DESeq2 (version 1.20.0).
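To illustrate the DEG criteria just stated, the short sketch below (Python with pandas; the table and column names such as log2FoldChange and pvalue are hypothetical placeholders for a DESeq2 results table, not the study's output) flags genes with fold change ≥ 2 or ≤ 0.5 and P < 0.05 and splits them into up- and down-regulated sets.

import pandas as pd

# Hypothetical DESeq2 results table with one row per gene.
results = pd.DataFrame({
    "gene": ["g1", "g2", "g3", "g4", "g5"],
    "log2FoldChange": [1.8, -1.2, 0.4, -2.5, 0.9],
    "pvalue": [0.001, 0.030, 0.200, 0.0005, 0.010],
})

# Fold change >= 2 or <= 0.5 corresponds to |log2 fold change| >= 1.
is_deg = (results["log2FoldChange"].abs() >= 1) & (results["pvalue"] < 0.05)
degs = results[is_deg]

up = degs[degs["log2FoldChange"] > 0]     # up-regulated after methyl ferulate treatment
down = degs[degs["log2FoldChange"] < 0]   # down-regulated after methyl ferulate treatment
print(len(degs), "DEGs:", len(up), "up-regulated,", len(down), "down-regulated")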
The expression levels of Ndufs7 were determined using qPCR, and the primers used for qPCR are listed in Supplementary Table 5. MDC (monodansylcadaverine) staining assay The MDC staining assay was performed as described previously . In brief, the fluorescent probe MDC (10,121–91-2, MedChemExpress) was diluted with PBS to a concentration of 100 μM. The mycelia of P. nicotianae grown under normal conditions or methyl ferulate treatment were collected, immersed in the MDC solution, and incubated for 15 min at 25 °C in the dark. After the samples were washed at least three times in PBS, the fluorescence signal was detected using confocal microscopy (FV1000, OLYMPUS). ATP content assay The ATP content of P. nicotianae was determined using an ATP content assay kit (BC0300; Solarbio, China). Briefly, 0.1 g of P. nicotianae mycelium was harvested and extracted with 1 mL of extraction solution buffer. The extract was centrifuged at 8,000 × g for 10 min at 4 °C, and the supernatant (0.5 mL) was mixed with chloroform (0.5 mL). The mixture was centrifuged at 8,000 × g at 4 °C for 3 min, and 0.1 mL of the supernatant was transferred to the working solution for further absorbance measurements. Mitochondrial complex I activity assay Mitochondrial complex I/NADH-CoQ reductase activity was determined using a mitochondrial respiratory chain complex I/NADH-CoQ reductase activity assay kit (BC0510; Solarbio, China). Briefly, 0.1 g of mycelium sample was suspended in 1 mL of extraction buffer 1. The homogenate was centrifuged at 600 × g for 10 min at 4 °C, and the supernatant was centrifuged again at 11,000 × g for 15 min. Extraction buffers 1 and 2 were added to the precipitate for ultrasonic disruption, and the resulting mixture was subjected to further measurements. NAD+/NADH content assay The NAD+/NADH content of P. nicotianae was determined using an NAD+/NADH quantification assay kit (S0176S; Beyotime, China). Briefly, a 0.1 g mycelium sample was extracted directly in 1 mL of extraction buffer. The extract was centrifuged at 3,000 × g for 10 min, and the supernatant (10 μL) was diluted to 50 μL. After mixing with 100 μL of detection antibody-HRP, the mixture was incubated at 37 °C for 60 min, and the absorbance was detected at 450 nm. Transmission electron microscopy (TEM) analysis The cell ultrastructure of P. nicotianae was observed via transmission electron microscopy (TEM). In brief, P. nicotianae was cultured on medium supplemented with or without 70 μg/mL methyl ferulate at 28 °C for 3 days. The mycelia of P. nicotianae were harvested and fixed in sterilized 2.5% (v/v) glutaraldehyde solution for 4 h and then in 1% (v/v) osmium tetroxide for 2 h. The samples were subsequently dehydrated in a graded acetone series (50%, 70%, 90%, 100%), with each step repeated twice for 15 min. After polymerization with Epon812 embedding agent, the resin blocks were sectioned at 70 nm with an ultramicrotome (Reichert-Jung ULTRACUT), and the sections were collected on copper grids. The sections were stained with 2% uranyl acetate and 2.6% lead citrate and then observed with a JEM-1200EX electron microscope (JEOL). JC-1 staining assay JC-1 staining was performed using an Enhanced Mitochondrial Membrane Potential Assay Kit (JC-1) (C2003S; Beyotime, China). In brief, the fluorescent probe JC-1 (200×) was diluted with JC-1 staining buffer into a JC-1 (1×) working solution, and the mycelia of P. nicotianae grown under normal conditions or methyl ferulate treatment were collected in 0.5 mL of the JC-1 (1×) working solution and incubated for 20 min at 37 °C in the dark.
After the samples were washed at least twice with JC-1 staining buffer, the fluorescence signal was detected with confocal microscopy (FV1000, OLYMPUS). Generation of transgenic tobacco plants expressing NtCOMT10 The NtCOMT sequences in the tobacco genome were identified from the China Tobacco Genome Database ( http://218.28.140.17/ ). Multiple alignments were performed and phylogenetic trees were constructed in MEGA via the neighbor-joining method with 1000 bootstrap replicates. The full-length coding region of NtCOMT10 was fused to the pCAMBIA35S-EGFP vector. The construct was introduced into K326 via Agrobacterium tumefaciens-mediated transformation. The overexpression lines were further identified via qPCR with the specific primers presented in Supplemental Table 5. The plants were grown in a greenhouse with a 12-h-light/12-h-dark cycle at 26 °C, and 300 μmol m⁻² s⁻¹ lamps were used for supplemental light. Measurement of the methyl ferulate content in root exudates The root exudates of the tobacco plants were collected via the root dipping method as described previously . Briefly, the root exudates of the wild-type or overexpression lines were collected when the plants had eight true leaves. The seedlings were dug up, and the attached soil was washed away. The roots were immersed in 100 mL of distilled water and cultured under normal conditions for 12 h. The water containing the exudates was filtered with 0.22-μm filters to remove root debris and microorganisms. Each group contained three replicates, and each replicate comprised 10 seedlings. The root exudates were extracted with ethyl acetate (1:1, v/v) and washed three times with methanol. Methyl ferulate was measured using a Waters Acquity ultrahigh-pressure liquid chromatography (UHPLC) I-Class System coupled with a Thermo tandem quadrupole mass spectrometer (MS) with an electrospray ionization (ESI) interface. 16S rRNA sequencing and data processing Rhizosphere soil samples from the methyl ferulate treatment group and the control group were collected 14 days after inoculation with P. nicotianae in the disease nursery field, with twelve replicates per group, for 16S rRNA sequencing. In the pot experiments, rhizosphere soil samples from four groups ( NtCOMT10 overexpression lines OE-7 and OE-18, the wild-type, and the methyl ferulate treatment; six replicates each) under two conditions (inoculated with or without P. nicotianae for 5 days) were collected. The soil DNA samples were extracted, sequenced, and analyzed by the Shanghai Majorbio Bio-Pharm Biotechnology Company. Briefly, soil DNA was isolated from 0.1 g of soil and used for 16S rRNA amplicon sequencing with primers targeting the V5-V7 regions, 799-F (5'-AACMGGATTAGATACCCKG-3') and 1193-R (5'-ACGTCATCCCCACCTTCC-3'). The purified amplicon products were sequenced on the Illumina MiSeq PE300 platform (Majorbio Bio-Pharm Technology). Data were analyzed (including network construction, volcano plots and the Wilcoxon rank-sum test) using the online Majorbio Cloud Platform ( www.majorbio.com ) . Isolation of rhizospheric bacteria and evaluation of Priestia megaterium NM17 antagonistic activity against P. nicotianae Bacterial isolation and quantification were performed as described previously . Briefly, rhizosphere soil samples from the methyl ferulate treatment were resuspended in sterile distilled water and serially diluted.
The diluted soil suspensions were spread on tryptic soy agar and nutrient agar media for bacterial isolation and were incubated at 28 °C for 2–4 days. Bacterial colonies were picked, purified three times, and stored at -80 °C with 15% (v/v) glycerol. Genomic DNA was extracted from the purified colonies, and full-length 16S rRNA genes were amplified using the universal bacterial primers 27F/1492R (Supplementary Table 4). The sequences were submitted to EzBioCloud ( www.ezbiocloud.net ) for bacterial identification (Supplementary Table 3). Purified colonies were screened for antagonistic activity against P. nicotianae . The Priestia megaterium NM17 strain was cultivated in nutrient broth (NB) medium overnight, centrifuged at 6000 × g, and then resuspended in sterile water. At the five-true-leaf stage, 18 individual Xiaohuangjin 1025 seedlings in pots were inoculated with the Priestia megaterium NM17 strain at a final concentration of 10⁷ cells per gram of soil via the root drenching method. Eighteen control Xiaohuangjin 1025 seedlings were treated with equal amounts of sterile water. Three days later, P. nicotianae was inoculated onto the roots of the tobacco plants. All the plants were grown in a climate chamber at 28 °C and regularly watered with sterile water. The symptoms of tobacco black shank disease were recorded daily, as described previously.
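The disease rate and disease index formulas used in the pathogenicity tests above can be applied directly in code. The sketch below (Python; the per-severity-level plant counts and the 0–9 grading scale are hypothetical, with 9 assumed to be the highest level) shows the calculation for one survey.

# Hypothetical survey: number of plants observed at each disease severity level (0 = healthy).
level_counts = {0: 20, 1: 6, 3: 4, 5: 3, 7: 2, 9: 1}
highest_level = 9

total_plants = sum(level_counts.values())
infected_plants = sum(n for level, n in level_counts.items() if level > 0)

# Plant disease rate = infected plants / total plants surveyed x 100%.
disease_rate = infected_plants / total_plants * 100

# Plant disease index = sum(level x plants at that level) / (total plants x highest level) x 100.
disease_index = sum(level * n for level, n in level_counts.items()) / (total_plants * highest_level) * 100

print(f"disease rate: {disease_rate:.1f}%, disease index: {disease_index:.1f}")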
After 5 d, the root exudates of these seedlings were collected. The seedlings were dug, and the attached soil was washed away. The roots were immersed in 100 mL of distilled water and cultured under normal conditions for 12 h. The root exudate samples were collected, frozen in liquid nitrogen, and stored at -80 °C. Sample preparation, extract analysis, metabolite identification, and quantification were performed at Wuhan MetWare Biotechnology Co., Ltd. ( www.metware.cn ) following standard procedures . Approximately 9 mL of freeze-dried root exudate powder was placed in 0.3 mL of 70% aqueous methanol at 4 °C overnight. The extract was centrifuged at 10,000 × g for 10 min and the supernatant was filtered through a 0.22-μm microporous membrane for further LC‒MS/MS analysis. Metabolites with VIP ≥ 1 and fold change ≥ 2 or ≤ 0.5 were considered as differential metabolites. Seven compounds selected from different root exudate metabolites were added to sterilized OA medium with a final concentration of 100 μg/mL for analysis of their inhibitory activities against P. nicotianae . The compounds were methyl ferulate (CAS 2309–07-1; powder; MACKLIN, China), ferulate (CAS 1135–24-6; powder; MACKLIN, China), adipic acid (CAS 124–04-9; powder; MACKLIN, China), punicic acid (CAS 544–72-9; liquid; MACKLIN, China), malic acid (CAS 6915–15-7; powder; MACKLIN, China), chlorogenic acid (CAS 327–97-9; powder; MACKLIN, China), and kinic acid (CAS 77–95-2; powder; MACKLIN, China). A diameter of 6 mm P. nicotianae disk was inoculated in OA medium, and cultured at 28 °C for three days. The diameters of the P. nicotianae mycelium ( n = 3) were measured, and the inhibition rates of these compounds were calculated. The inhibitory activity methyl ferulate against eight plant pathogenic fungi ( P. nicotianae , B. cinerea , R. zeae , A.alternate , C. lagenarium , F. oxysporum, P. sojae ) was assayed as described previously. In brief, 0.6 cm diameter mycelium disk of phytopathogens was inoculated in the center of OA medium with a methyl ferulate final concentration of 70 μg/mL, respectively. The inhibition rate of methyl ferulate against phytopathogens was calculated by measuring the diameter of the mycelium ( n = 3), which was cultured at 28 °C for three days. The antifungal activities of the isolated Bacillus strains were determined as described previously . Briefly, the isolated Bacillus strains were cultured in liquid nutrient broth (NB) medium for 12 h at 28 °C. The disk of P. nicotianae was placed in the center of the OA medium and 5 μL of Bacillus strains was inoculated on both sides of the disk ( n = 3 for each strain). After culture at 28 °C for 3 days, the diameter of each colony was measured, and the growth inhibition rate of mycelia was calculated as follows: colony diameter (cm) = colony measured diameter—cluster diameter (0.6 cm); mycelial inhibition  = (normal colony diameter—treated colony diameter)/normal colony diameter × 100. 50 value of methyl ferulate against P. nicotianae The EC 50 value of methyl ferulate against P. nicotianae was calculated from mycelial growth inhibition, which was determined on the basis of a linear regression between the concentration and percentage inhibition . The inhibitory activity of methyl ferulate against P. nicotianae was determined via a previously described method . Methyl ferulate was dissolved in DMSO and added to sterilized OA medium at 0, 20, 40, 60, 80, or 100 μg/mL. A 0.6 cm P. 
nicotianae colony was placed in the center of the OA medium and cultured for 3 days at 28 °C, and the colony diameter and mycelial inhibition were calculated as follows: colony diameter (cm) = measured colony diameter − inoculation disk diameter (0.6 cm); mycelial inhibition (%) = (normal colony diameter − treated colony diameter) / normal colony diameter × 100. Each experiment included three replicates. The symptoms of tobacco black shank disease in the methyl ferulate-treated plants or the overexpression plants were compared with those of the wild-type, as described previously . Briefly, seeds of Xiaohuangjin 1025 tobacco were germinated and transplanted into pots, and the seedlings were grown until the five-leaf stage. Each seedling was inoculated with 2 g of millet containing P. nicotianae per 200 g of soil in each pot. The seedlings were treated with 20 mL of 70 μg/mL methyl ferulate per pot or with DMSO (each treatment was repeated 3 times, with each replicate containing 12 tobacco plants), and the disease index was monitored daily for at least 10 d. Plant disease rate = (number of infected plants/total number of plants surveyed) × 100%; plant disease index = [∑ (number of diseased plants at each level × corresponding level value) / (total number of plants surveyed × highest level value)] × 100. Soil or tobacco rhizosphere soil was collected 5 days after inoculation with P. nicotianae , and total DNA was extracted and purified from the soil samples using a soil genomic DNA extraction kit (TIANGEN, China). A qPCR assay with P. nicotianae -specific primers (Supplementary Table 5) was performed, and the density of P. nicotianae was quantified. Standard curves were generated using tenfold serial dilutions of a plasmid containing the P. nicotianae 18S rRNA gene. Each assay was performed in triplicate. RNA-seq analysis and validation of DEGs using qPCR RNA samples were extracted using a total RNA isolation kit (Vazyme, RC411-01) and sequenced using the NovaSeq X Plus platform (Majorbio Bio-Pharm Biotechnology Co. Ltd., Shanghai, China). The data were analyzed using the online platform of the Majorbio Cloud Platform (www.majorbio.com) . The raw sequencing reads were filtered using FASTP and mapped to the Phytophthora nicotianae reference genome (GCA_001483015.1) using HISAT2 (version 2.0.5) with the default parameters. DEGs (defined by a fold change ≥ 2 or ≤ 0.5 with P < 0.05) were analyzed using DESeq2 (version 1.20.0). The expression levels of Ndufs7 were determined using qPCR, and the primers used for qPCR are listed in Supplementary Table 5. The MDC staining assay was performed as described previously . In brief, the fluorescent probe MDC (10121-91-2, MedChemExpress) was diluted with PBS to a concentration of 100 μM. The mycelia of P. nicotianae grown under normal conditions or under methyl ferulate treatment were collected, suspended in the MDC solution, and incubated for 15 min at 25 °C in the dark. After the samples were washed at least three times in PBS, the fluorescence signal was detected using confocal microscopy (FV1000, OLYMPUS). The ATP content of P. nicotianae was determined using an ATP content assay kit (BC0300; Solarbio, China). Briefly, 0.1 g of P. nicotianae mycelium was harvested and extracted with 1 mL of extraction buffer. The supernatant (0.5 mL) was centrifuged at 8,000 × g for 10 min at 4 °C and mixed with chloroform (0.5 mL). The mixture was centrifuged at 8,000 × g at 4 °C for 3 min, and 0.1 mL of the supernatant was transferred to a working solution for further absorbance measurements.
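For clarity, the colony-diameter correction, the mycelial inhibition rate, and the regression-based EC50 estimate described above can be expressed in a few lines of code. The sketch below is illustrative only: the concentrations and inhibition values are hypothetical, and the cited reference may use a different regression scale (e.g., probit or log-dose) for the EC50 than the simple linear fit assumed here.

```python
import numpy as np

PLUG_DIAMETER_CM = 0.6  # diameter of the inoculation disk subtracted from measurements

def colony_diameter(measured_cm):
    """Net colony diameter: measured diameter minus the 0.6 cm inoculation disk."""
    return measured_cm - PLUG_DIAMETER_CM

def mycelial_inhibition(control_measured_cm, treated_measured_cm):
    """Percent inhibition = (normal - treated) / normal * 100, using net diameters."""
    normal = colony_diameter(control_measured_cm)
    treated = colony_diameter(treated_measured_cm)
    return (normal - treated) / normal * 100.0

# Hypothetical dose-response data: methyl ferulate concentration (ug/mL) vs mean inhibition (%).
concentrations = np.array([20, 40, 60, 80, 100], dtype=float)
inhibition = np.array([18.0, 34.0, 47.0, 61.0, 72.0])

# Linear regression of inhibition on concentration; EC50 is the concentration giving 50% inhibition.
slope, intercept = np.polyfit(concentrations, inhibition, 1)
ec50 = (50.0 - intercept) / slope
print(f"Estimated EC50 ~ {ec50:.1f} ug/mL")
```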
Mitochondrial complex I/NADH-CoQ reductase activity was determined using a mitochondrial respiratory chain complex I/NADH-CoQ reductase activity assay kit (BC0510; Solarbio, China). Briefly, 0.1 g of mycelium sample was suspended in 1 mL of extraction buffer 1. The homogenate was centrifuged at 600 × g for 10 min at 4 °C, and the supernatant was centrifuged again at 11,000 × g for 15 min. Extraction buffers 1 and 2 were added to the pellet for ultrasonic disruption. The obtained mixture was subjected to further measurements. NAD+/NADH content assay The NAD+/NADH content of P. nicotianae was determined using an NAD+/NADH quantification assay kit (S0176S; Beyotime, China). Briefly, a 0.1 g mycelium sample was extracted directly into 1 mL of extraction buffer. The extract was centrifuged at 3,000 × g for 10 min, and the supernatant (10 μL) was diluted to 50 μL. After mixing with 100 μL of detection antibody-HRP, the mixture was incubated at 37 °C for 60 min, and the absorbance was detected at 450 nm. The cell ultrastructure of P. nicotianae was observed via transmission electron microscopy (TEM). In brief, P. nicotianae was cultured on medium supplemented with or without 70 μg/mL methyl ferulate at 28 °C for 3 days. The mycelia of P. nicotianae were harvested and fixed in sterilized 2.5% (v/v) glutaraldehyde solution for 4 h and then in 1% (v/v) osmium tetroxide for 2 h. The samples were subsequently dehydrated in a graded acetone series (50%, 70%, 90%, and 100%), twice for 15 min at each concentration. After embedding and polymerization in Epon 812 resin, the blocks were sectioned at 70 nm with an ultramicrotome (Reichert-Jung ULTRACUT), and the sections were collected on copper grids. The sections were stained with 2% uranyl acetate and 2.6% lead citrate and then observed with a JEM-1200EX electron microscope (JEOL). JC-1 staining was performed using an Enhanced Mitochondrial Membrane Potential Assay Kit (JC-1) (C2003S; Beyotime, China). In brief, the fluorescent probe JC-1 (200×) was diluted with JC-1 staining buffer to a JC-1 (1×) working solution. The mycelia of P. nicotianae grown under normal conditions or under methyl ferulate treatment were collected in 0.5 mL of the JC-1 (1×) working solution and incubated for 20 min at 37 °C in the dark. After the samples were washed at least twice with JC-1 staining buffer, the fluorescence signal was detected with confocal microscopy (FV1000, OLYMPUS). The NtCOMT sequences in the tobacco genome were identified from the China Tobacco Genome Database ( http://218.28.140.17/ ). Multiple sequence alignments were performed, and phylogenetic trees were constructed via the neighbor-joining method with 1,000 bootstrap replicates in MEGA. The full-length coding region of NtCOMT10 was fused into the pCAMBIA35S-EGFP vector. The construct was introduced into K326 via Agrobacterium tumefaciens -mediated transformation. The overexpression lines were further identified via qPCR with the specific primers presented in Supplementary Table 5. The plants were grown in a greenhouse with a 12-h-light/12-h-dark cycle at 26 °C, and lamps providing 300 μmol m⁻² s⁻¹ were used for supplemental light. The root exudates of the tobacco plants were collected via the root dipping method as described previously . Briefly, the root exudates of the wild-type or overexpression lines were collected when the plants had eight true leaves. The seedlings were dug up, and the attached soil was washed away. The roots were immersed in 100 mL of distilled water and cultured under normal conditions for 12 h. The water containing the exudates was filtered through 0.22-μm filters to remove root debris and microorganisms.
Each group contained three replicates, and each replicate comprised 10 seedlings. The root exudates were extracted with ethyl acetate (1:1, v/v) and washed three times with methanol. Methyl ferulate was measured using a Waters Acquity ultrahigh-pressure liquid chromatography (UHPLC) I-Class System coupled with a Thermo tandem quadrupole mass spectrometer (MS) with an electrospray ionization (ESI) interface. Rhizosphere soil samples from the methyl ferulate treatment group and the control group were collected 14 days after inoculation with P. nicotianae in the disease nursery field, with twelve replicates per group, for 16S rRNA gene sequencing. Rhizosphere soil samples from four groups grown in pots ( NtCOMT10 overexpression lines OE-7 and OE-18, wild-type lines, and methyl ferulate-treated plants; six replicates each) under two conditions (inoculated or not inoculated with P. nicotianae for 5 days) were also collected. The soil DNA samples were extracted, sequenced, and analyzed by the Shanghai Majorbio Bio-Pharm Biotechnology Company. Briefly, soil DNA was isolated from 0.1 g of soil and used for 16S rRNA amplicon sequencing with primers targeting the V5-V7 regions, 799-F (5'-AACMGGATTAGATACCCKG-3') and 1193-R (5'-ACGTCATCCCCACCTTCC-3'). The purified amplicon products were analyzed using the Illumina MiSeq PE300 platform (Majorbio Bio-Pharm Technology). Data were analyzed (including network construction, volcano plots, and the Wilcoxon rank-sum test) using the online platform of the Majorbio Cloud Platform ( www.majorbio.com ) . Bacterial isolation and quantification were performed as described previously . Briefly, rhizosphere soil samples from the methyl ferulate treatment were resuspended in sterile distilled water and serially diluted. The diluted soil suspensions were spread on tryptic soy agar and nutrient agar media for bacterial isolation and were incubated at 28 °C for 2–4 days. Bacterial colonies were picked, purified three times, and stored at -80 °C with 15% (v/v) glycerol. Genomic DNA was extracted from the purified colonies, and full-length 16S rRNA genes were amplified using the universal bacterial primers 27F/1492R (Supplementary Table 4). The sequences were submitted to EzBioCloud ( www.ezbiocloud.net ) for bacterial identification (Supplementary Table 3). Purified colonies were screened for antagonistic activity against P. nicotianae . The Priestia megaterium NM17 strain was cultivated in nutrient broth (NB) medium overnight, centrifuged at 6000 × g, and then resuspended in sterile water. At the five-true-leaf stage, 18 individual Xiaohuangjin 1025 seedlings in pots were inoculated with the Priestia megaterium NM17 strain at a final concentration of 10⁷ cells g⁻¹ of soil substrate via the root drenching method. Eighteen control Xiaohuangjin 1025 seedlings were treated with equal amounts of sterile water. Three days later, P. nicotianae was inoculated into the roots of the tobacco plants. All the plants were grown in a climate chamber at 28 °C and regularly watered with sterile water. The symptoms of tobacco black shank disease were recorded daily, as described previously.
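The 16S rRNA amplicon data were analyzed on the Majorbio Cloud Platform, whose exact settings are not reported here; the sketch below only illustrates the kind of per-OTU Wilcoxon rank-sum comparison between treatment and control rhizosphere samples mentioned above. The function and variable names are hypothetical, and no multiple-testing correction is shown.

```python
import numpy as np
from scipy.stats import ranksums

def differential_otus(counts_treat, counts_ctrl, otu_ids, alpha=0.05):
    """
    counts_treat, counts_ctrl: arrays of shape (n_samples, n_otus) with OTU counts.
    Counts are converted to relative abundances, then a Wilcoxon rank-sum test is
    applied per OTU and a log2 fold change of mean abundances is computed.
    """
    rel_t = counts_treat / counts_treat.sum(axis=1, keepdims=True)
    rel_c = counts_ctrl / counts_ctrl.sum(axis=1, keepdims=True)
    results = []
    for j, otu in enumerate(otu_ids):
        stat, p = ranksums(rel_t[:, j], rel_c[:, j])
        log2fc = np.log2((rel_t[:, j].mean() + 1e-9) / (rel_c[:, j].mean() + 1e-9))
        results.append((otu, log2fc, p))
    # keep OTUs with an uncorrected p-value below alpha (FDR control would follow in practice)
    return [r for r in results if r[2] < alpha]
```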
Supplementary Material 1: Figure S1. The inhibitory effect of methyl ferulate on fungi. Figure S2. The sequence alignment of OTU5403 (field samples), OTU9124 (pot samples) and the NM17 strain. Figure S3. Gene analysis of NtCOMT10 . A. Phylogenetic relationships, motif patterns, and gene structures of tobacco COMTs . B. Heatmap analysis of the expression levels of COMTs in resistant and susceptible tobacco cultivars; 0 h, before inoculation with P. nicotianae ; 60 h, after inoculation with P. nicotianae . C. The expression levels of NtCOMT10 in the overexpression lines. Figure S4. Shannon's diversity of the wild-type, the wild-type with methyl ferulate treatment, and the overexpression lines with or without inoculation with P. nicotianae . Figure S5. PCoA at the OTU level of the wild-type and the overexpression lines with or without inoculation with P. nicotianae . Supplementary Material 2: Supplemental Table 1. Special root exudate metabolites in resistant cultivars. Supplementary Material 3: Supplemental Table 2. The inhibition of seven compounds from root exudates. Supplementary Material 4: Supplemental Table 3. Correlations and topological properties of the wild-type (I-CK) and the wild-type with methyl ferulate treatment (I-MF) rhizosphere bacterial microbiomes after inoculation with P. nicotianae for 10 days in the field. Supplementary Material 5: Supplemental Table 4. The antagonistic effects of 13 Bacillus strains on P. nicotianae . Supplementary Material 6: Supplemental Table 5. The list of primers used in this study. Supplementary Material 7: Supplemental Data set 1. All sample data of root exudate metabolism.
Reducing anxiety and depression in infertility among Nigerian women: an exploratory psycho-educational intervention trial (RADIANT) study protocol
b5ce0380-eee3-41f9-b7dd-ba7a5d1cefc0
8356932
Patient Education as Topic[mh]
Introduction Infertility is defined as “a disease of the reproductive system defined by the failure to achieve a clinical pregnancy after 12 months or more of regular unprotected sexual intercourse” . According to Demographic and Health Surveys (DHS) from developing countries, one in four couples is affected by infertility . A high premium is placed on childbirth in Nigeria; all couples are traditionally expected to procreate. Nigeria's DHS does not report infertility prevalence; however, rural community surveys have reported prevalences of 8.7% and 30.3% . Infertility constitutes 24-59% of gynecological consultations in Nigeria [ - ]. Infertile women have feelings of loss, grief, anger, sadness, shame, self-blame and lack of femininity , which predict actual psychopathology . Though infertility is a couple's problem, the woman bears the larger burden of the associated stigma, due to socio-cultural beliefs that reproduction is the woman's primary responsibility and that a failure to reproduce is her failing . Studies show that infertile women are more likely than fertile women to be economically disadvantaged , suffer psychological distress , be socially stigmatized [ , , ] and suffer intimate partner violence . The resulting marital disharmony may lead to divorce, for which the woman is blamed . Infertile men may also have more anxiety and depression symptoms than fertile men , but these appear to be less than those suffered by their partners . Nigerian studies have suggested counselling and psychological interventions to ameliorate the documented psychosocial aspects of infertility [ , , ]. However, no interventional studies were found on the subject. Evidence from outside of sub-Saharan Africa suggests that psychosocial interventions for infertile couples are effective [ - ]. An innovative approach to intervention delivery is the use of eHealth programming, “the use of emerging interactive technologies [e.g. internet, CD-ROMs etc.] to enable health improvement and health care services” . These have the advantage of not requiring facilitators and of being reproducible and consistent across time and location. This study proposes to identify psychosocial and emotional needs in infertile women, develop an interventional eHealth tool and determine its effectiveness in improving the psychological and emotional health of infertile women. Traditional one-on-one counselling as part of on-going infertility evaluation and care is often difficult to perform in the study environment, due to a lack of such skills among gynecologists and the stigma attached to seeing a psychiatrist. Integrating mental health care into regular infertility clinics may also be difficult due to a dearth of providers. The authors hope to employ eHealth to combine theoretical knowledge and coping skills with drama, which is a popular form of local entertainment. This is a concept coined as 'edutainment', “entertainment (as by games, films, or shows) that is designed to be educational” . If effective, this intervention will not require face-to-face counselling. Family and close friends can also watch and can be educated on supporting the infertile couple.
The aims of the study are to: 1) develop content for a culturally-relevant and cost-effective psychoeducational intervention package aimed at reducing social and emotional problems among women with infertility in Ibadan, Nigeria; 2) translate the developed content into an audio-visual drama production as a user-friendly tool for the target audience; and 3) evaluate the effectiveness of this psychoeducational intervention for reducing symptoms of anxiety and depression among women with infertility in Ibadan, Nigeria. Methods A multi-method study design will be utilized and implemented in two phases; phase I: development and validation of the audio-visual psychoeducational intervention material; phase II: exploratory randomized controlled trial to test the effectiveness of the intervention. The development of the psychoeducational material will be a multi-disciplinary process involving the following specialists: a psychiatrist, a gynecologist, a public health physician, and a communication and language arts expert. Theatre arts professionals and students will also be recruited subsequently. The two phases are described below. Phase I: development of psychoeducational intervention material: this will be conducted in the following steps: 1) Content development: a) literature review: a comprehensive review of the literature will be performed, and a composite list of emergent themes that affect women with infertility globally, and especially in the Nigerian socio-cultural setting, will be developed. This will be independently performed by two units of two members each, with subsequent harmonization of both lists by the research group through discussions and consensus; b) gender consideration: two focus group discussions (FGDs) and several key informant interviews (KIIs) of infertile males (or spouses of infertile women) will be organized to understand the perceptions and experiences of males with respect to issues of infertility. This is important, as only a few studies conducted among men were found in the literature , resulting in little being known about the male experience. The investigators hope to gain a holistic understanding from both male and female perspectives, which should strengthen the content development process of the psycho-educational intervention; c) specialist input from gynecologists and psychiatrists: the gynecologists will provide a summary sheet about the causes, course, treatment options and outcomes of infertility globally and in the study environment. The psychiatrists will also provide summary points about coping skills and building resilience in the face of life challenges as an essential life skill; d) emergence of the psychoeducational content: the information derived from the three sources above will then be distilled by two units working independently to develop the psychoeducational content. Both teams will then meet to harmonize and resolve areas of disagreement and to produce a final summative material that is arranged into themes. 2) Translation of the developed content into a storyline and script: the content as developed in step 1 will be transformed into a storyline with specific emphasis on effective communication and message impact for the target audience. This will be overseen by the communications and language arts expert.
3) Production of psychoeducational intervention video: the script will be acted by professional actors, as well as students of the Department of Theatre Arts of the University of Ibadan, to produce a short film which will illustrate and deliver the key messages derived from the content. 4) Pre-test and establishment of face validity: the final video produced to deliver the psychoeducational intervention will be pretested by screening it to a selection of nurses and doctors (independent of the study team) and 6-10 women with infertility (drawn from a different facility, other than the proposed study sites). Their views will be sought thereafter regarding acceptability, together with critical feedback and comments. The production will be edited as appropriate, following this screening. Phase II: randomized controlled trial to test the effectiveness of the psychoeducational intervention Study design: a randomized controlled trial with an allocation ratio of 1:1 will be utilized. Study setting: 1) University College Hospital (UCH), Ibadan, Nigeria: it serves as a tertiary referral center for Nigeria and sub-Saharan Africa. The study will be at the outpatient gynecology clinic of the hospital; 2) Adeoyo Maternity Hospital (AMH), Yemetu, Ibadan, Nigeria: this is a province-owned secondary health care facility. Study subjects: women attending gynecology clinics in UCH or AMH on account of infertility. Consenting participants will be randomized into the intervention and control arms of the study. Eligibility criteria: women aged 18 years and above, who have been trying to conceive for at least one year, and who do not have any children. Interventions: consecutive, eligible infertility clinic attendees will be invited to participate in the study, and informed consent will be obtained. Sociodemographic data will be collected via a structured questionnaire. Baseline assessment of participants' psychological and emotional state will be performed using the Hospital Anxiety and Depression Scale (HADS) . Thereafter, the participants will be allocated into study groups. Participants in the intervention arm will attend the intervention session on a specified date at the facility. The psychoeducational video will be viewed, and immediate feedback will be obtained. Participants in the control arm will also receive the usual treatment offered to all attendees at the gynecology clinics - comprising health talks by public health nurses and explanations and counselling provided by the managing physicians. The study's primary outcomes are depressive and anxiety symptoms. The HADS is a self-assessment scale with subscales that measure the severity of anxiety and depression. It is validated for use in medical outpatient clinics. Participants will be followed up at three and six weeks; the HADS will be administered at both time points. Participants will be counseled on the need for follow-up. Recruited respondents in both arms will receive phone call reminders ahead of scheduled clinic appointments to improve attendance and minimize loss to follow-up. Sample size estimation: the minimum sample size was determined using the formula for comparing two means, i.e., $n = \frac{2(Z_{1-\alpha/2} + Z_{1-\beta})^2}{\Delta^2}$, where the effect size is $\Delta = \frac{\mu_1 - \mu_2}{\sigma}$, $Z_{1-\alpha/2}$ corresponds to α set at 5% (i.e., 1.96), and $Z_{1-\beta}$ corresponds to β set at 20% (i.e., power of 80%, 0.84) . A moderate effect size of 0.53 was used. A minimum sample size of 57 was calculated. An attrition rate of 20% was incorporated in the calculation, and the sample size was estimated at 68 women in each group.
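As a worked illustration of the sample-size formula above, the sketch below recomputes the per-group size for an effect size of 0.53 and inflates it for 20% attrition. The exact figures depend on how the critical values are rounded and how the attrition adjustment is applied, so the output may differ slightly from the numbers quoted in the protocol.

```python
from math import ceil
from scipy.stats import norm

def n_per_group(effect_size, alpha=0.05, power=0.80, attrition=0.0):
    """Two-means sample size: n = 2 * (z_{1-alpha/2} + z_{1-beta})^2 / delta^2."""
    z_alpha = norm.ppf(1 - alpha / 2)   # approximately 1.96
    z_beta = norm.ppf(power)            # approximately 0.84
    n = 2 * (z_alpha + z_beta) ** 2 / effect_size ** 2
    return ceil(n / (1 - attrition)) if attrition else ceil(n)

print(n_per_group(0.53))                  # minimum per-group size
print(n_per_group(0.53, attrition=0.20))  # inflated for 20% loss to follow-up
```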
Randomization procedure: a randomization list of randomly permuted blocks of treatment assignments, in blocks of 4, will be prepared by the study statistician using a computer-generated table of random numbers. The research assistant will open the sealed envelope containing the assigned group after the baseline HADS has been administered. The clinical staff recruiting participants and the data analyst will be blinded; the analyst will receive two spreadsheets of data without knowing which corresponds to the intervention arm. Data management and analysis: the data collected will be entered into the computer and analysed using IBM SPSS version 23.0. Interim analysis will be carried out using the Mann-Whitney U test to compare independent samples of the intervention and control groups at 0, 3 and 6 weeks. Survival analysis will also be carried out at each time point to assess the number of cases of depression or anxiety that transitioned to non-cases. A data monitoring committee is not necessary, as there are no interventions with a bearing on patient safety. There are no issues that might necessitate termination of the trial. Malfeasance or adverse events are not applicable in a trial in which the intervention consists of audiovisual expert education.
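A minimal sketch of the permuted-block randomization list described under "Randomization procedure" above is shown below; the block size of 4 and the 1:1 allocation follow the protocol, while the seed and function names are illustrative (the actual list will be produced by the study statistician).

```python
import random

def blocked_allocation(n_participants, block_size=4, arms=("intervention", "control"), seed=2019):
    """Randomly permuted blocks with equal numbers of each arm per block (1:1 allocation)."""
    assert block_size % len(arms) == 0
    rng = random.Random(seed)
    allocation = []
    while len(allocation) < n_participants:
        block = list(arms) * (block_size // len(arms))
        rng.shuffle(block)           # permute assignments within the block
        allocation.extend(block)
    return allocation[:n_participants]

# e.g. a list for 136 participants (68 per arm), prepared in advance and sealed in envelopes
print(blocked_allocation(136))
```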
Ethical considerations: ethical approval was obtained from the University of Ibadan/University College Hospital Ethical Committee (Ref: UI/EC/17/0003). Protocol amendments will be communicated to the ethics committee and written approval obtained before commencing. Clinical care providers in the clinic will recruit the participants and obtain assent; then a trained research assistant will obtain informed consent ( Annex 1 ). Anonymized data will be obtained from participants in privacy, and the collected data will be kept in a secure, locked cabinet in an office. Data entered onto the study computer will be password-protected. Personal information will only be recorded in order to send reminders and invitations for follow-up sessions. The investigators will have access to the final data set and will not disclose information to other parties. Trial registration: the trial is registered with the Pan African Clinical Trial Registry [PACTR201901892865101]. Reporting of trial results: the demographics of the participants in the intervention and control groups will be described, and the proportion of cases to non-cases will be shown; the changes in these over 6 weeks will also be documented. Trial findings will be communicated in a written report to sponsors, presented at a gynecology conference and published in a peer-reviewed journal. The authors will be those who have contributed to the conception, design, data acquisition and writing of the manuscript. The full protocol and dataset will be available by contacting the corresponding author.
Genomic prediction for yield and malting traits in barley using metabolomic and near-infrared spectra
41d36688-dc27-422d-a8f3-5ec6aec4038f
11717810
Biochemistry[mh]
Barley ( Hordeum vulgare L.) is a widely cultivated cereal crop primarily utilized for animal feed and malting for alcoholic beverage production (Miralles et al. ; Verma et al. ). Over the last decades, a growing demand for malting barley has increased the necessity for commercial varieties that integrate high productivity and superior malting quality. Grain yield (GY) and grain protein content (PC) are two of the most important traits in barley production. Developing varieties with high grain yield potential and a protein content in the range of 9.0–11.5% is a central goal in barley breeding programs (Bertholdsson ; Emebiri ; Barmeier et al. ). These traits can only be assessed late in the breeding process when there are enough seeds for replicated field trials. Malting quality refers to the capacity to undergo a successful malting process, which ultimately influences the flavor, aroma, and overall quality of the resulting beverage. Key traits influencing malting quality include malt extract yield ≥ 80%, malt protein between 9.5 and 12.5%, β-glucan from < 0.2 to 1.5%, soluble protein between 4 and 6%, diastatic power between 70 and 120 °L, wort viscosity between 1.5 and 5.0 cP, and low enzyme concentration (α-amylase, β-amylase, limit dextrinase, and β-glucosidase), among other traits (Li et al. ; Guo et al. ; Sarup et al. ). The assessment of malting quality is a complex and expensive process, as it depends on numerous interrelated traits that collectively contribute to producing high-quality malt. Therefore, this assessment is usually not available for all breeding lines and replicate samples. Most importantly, just as with the phenotypes for grain yield and protein content, reliable information on malting quality is not available in the early stages of the breeding cycle, where selection is most intense. Breeding for the improvement of yield and malting quality traits in barley is therefore challenged by the fact that assessments of these traits only become available late in the breeding process. Genomic selection (GS, Meuwissen et al. ) based on whole-genome prediction (WGP) is a cost-effective methodology that can enhance accuracy in the early stages of selection, and it has been successfully utilized in plant and animal breeding to improve traits of economic importance (Crossa et al. ; Hayes and Goddard ; Raffo and Jensen ). To further extend and improve genomic selection breeding programs, there has been an increasing interest in exploiting omics technologies (Fakrudin et al. ; Chaudhary et al. ). Omics approaches generate a large quantity of data that can be seen as intermediate phenotypes (i.e., endophenotypes) between the action of the DNA and the final phenotype. Some examples of the utilization of omics in the context of genetic analysis and breeding can be found for metabolomics (Riedelsheimer et al. ; Hayes et al. ; Guo et al. , ), transcriptomics (Guo et al. ; Delrot et al. ; Morgante et al. ), and proteomics (Zhu et al. ). In addition, other techniques measuring chemometric traits, such as near-infrared spectroscopy (NIRS), have been proposed (Hayes et al. ; Rincent et al. ; Robert et al. ). Similarly to genomics, different omics features and near-infrared (NIR) wavelengths can be utilized to predict phenotypes and genetic values (Riedelsheimer et al. ; Hayes et al. ; Rincent et al. ; Christensen et al. ; Derbyshire et al. ; Robert et al. ). In this article, we focus on the utilization of metabolomics and NIR data originating from nuclear magnetic resonance (NMR) spectroscopy and NIRS, respectively.
NMR spectroscopy is a powerful analytical technique that produces a high-dimensional set of signal intensities that can be associated with specific metabolites (Gunther et al. ); these signal intensities will be referred to as metabolomic intensities (MIs) hereinafter. Metabolomics data have been successfully utilized for the prediction of complex traits in maize (Riedelsheimer et al. ), rice (Xu et al. ), wheat (Hayes et al. ), barley (Guo et al. ), and other plant and animal species (reviewed by Fernandez et al. ; Scossa et al. , and Sakurai 2022). NIRS is a low-cost, non-destructive technique that quantifies the absorbance/reflectance of biological samples at a broad range of wavelengths in the visible and NIR spectrum. NIR wavelengths are routinely used in cereal breeding programs to predict water and protein content (Dowell et al. ; Osborne ) and are often available for any other purpose without additional costs. Recently, Rincent et al. proposed an alternative called phenomic selection, where NIR wavelengths are used for the prediction of phenotypes. Phenomic selection has been successfully used to predict complex traits in wheat (Rincent et al. ; Cuevas et al. ; Krause et al. ; Robert et al. ), maize (Lane et al. ), rye (Galán et al. ), triticale (Zhu et al. ), soybean (Parmley et al. ; Zhu et al. ), and poplar (Rincent et al. ). Different methods have been proposed to incorporate omics or NIRS data in statistical genetic models. The MIs and NIR wavelengths can be included as regressors in genomic-like omics-based (GLOB) prediction models (Robert et al. ), where all variables can be directly incorporated as separate random effects or via similarity matrices (Riedelsheimer et al. ; Guo et al. ; Rincent et al. ; Schrag et al. ; Brault et al. ). However, while several approaches have provided insights into the relevance of MIs or NIR wavelengths for the trait, they do not generate predictions of omics- or NIR-based genetic effects that can be directly used for breeding purposes. For this purpose, Christensen et al. proposed a joint model that generates genomic estimated breeding values (GEBVs) as a combination of estimates of direct genomic effects and omics-mediated genomic effects (GOBLUP). This model has recently been implemented using genomic and metabolomic data in barley (Guo et al. ) and microbiome data in sheep (Boggio et al. ), and the GOBLUP model can also potentially be useful for exploiting NIR data. In this study, we used a spring barley breeding population phenotyped for grain yield (GY), grain protein content (PC), and five malting quality traits: malt extract yield (EY), wort viscosity (WV), wort color (WC), filtering speed (FS), and β-glucan content (BG), with MIs originating from NMR on leaf tissue and NIR wavelengths originating from whole malted grain from the same experimental plots. We had three specific objectives: (i) to investigate the genetic variation and heritability of MIs and NIR wavelengths; (ii) to investigate variance components (VCs) and heritabilities for genetic models including genomic and metabolomic data (GOBLUP-MI) or genomic data and NIR wavelengths (GOBLUP-NIR) for all the available traits; and (iii) to evaluate the performance of the developed models (GBLUP, GOBLUP-MI and GOBLUP-NIR) for the prediction of breeding values for the traits included.
The accuracies of predicted breeding values were evaluated using fivefold and leave-one-breeding-cycle-out (LBCO) cross-validation (CV) schemes, and the results from both models were compared with those of a baseline genomic model (GBLUP). Experimental data The plant material consisted of 639 sixth-generation (F6) spring barley ( Hordeum vulgare L.) lines tested in 2,250 individual plots by the breeding company Nordic Seed A/S. The descriptive statistics for GY, PC, and the MQ traits WV, BG, EY, FS, and WC are presented in Table . The breeding lines came from two breeding cycles tested in the years 2021 to 2022 in two locations in Denmark (DK): Odder (Central DK), Holeby (South-East DK) and Skive (North-West DK). A breeding cycle is defined as all crosses made within a single calendar year. The breeding lines within each year-location combination were arranged in field trials (i.e., experimental blocks) following a randomized incomplete block design. The field trials were divided into smaller plots of 8.25 m² (5.5 × 1.5 m), and in each plot a barley breeding line or a control line was sown. Two control barley lines were sown with three replications in each trial. The grain yield (kg/8.25 m²) and the protein content, estimated by NIR spectra on raw grain using a PerkinElmer DA 7440 On-line NIR instrument, were recorded for each plot. Grain samples from each plot were collected and processed in micro-malting batches to obtain several malting quality traits: malt extract yield , wort viscosity (mPa·s), wort color (European Brewery Convention units), filtering speed (cm/20 min), and β-glucan (mg/L). A detailed description of the methodology utilized to obtain the malting quality traits can be found in Sarup et al. . The DNA extraction was performed using a modified CTAB method (Rogers and Bendich, 1985). The plant material was genotyped using an Illumina iSelect 9K barley chip. A total of 8,198 single-nucleotide polymorphism (SNP) markers were utilized. Quality control was done by removing SNPs with a minor allele frequency (MAF) lower than 5% and a call rate lower than 0.90. Genotypes were coded 0, 1, 2, counting the number of copies of the reference allele at each locus. Missing genotypes were ~0.3% and were assigned two times the observed allele frequency (i.e., mean dosage). For each plot, the metabolomic information was obtained from 10 cuts of green flag leaf tips randomly distributed in the yield plot just after flag leaf appearance. All samples from the same location, year, and trial were collected in Eppendorf tubes on the same date within 3 h. The tubes were stored on dry ice in the field and subsequently frozen at −20 °C. The tissue samples were freeze-dried and thereafter pulverized using a TissueLyser II (Qiagen®), after which 1.0 ml of 50% methanol was added to the tubes. The samples were incubated in a Thermo shaker (TS-DW, Biosan) at 50 °C for 10 min and cooled to room temperature. After 5 min at 4,000 × g in a centrifuge (4-5C, Sigma), 0.70 ml of supernatant was transferred to 2 ml Eppendorf tubes and frozen at −20 °C until shipment to the NMR laboratory in one batch per year (3 months after harvest). The samples were shipped on dry ice and stored at −80 °C upon arrival at the Swedish NMR center at the University of Gothenburg, Sweden. For NMR analysis, the samples were placed in a CentriVap lyophilizer to dry for 2 h at a setting of 20 °C. Following this, 60 µl of methanol-d4 was added to each tube and the samples were dried again for one hour.
Then 600 µl of PREC buffer (37.5 mM NaPi pD 6.95, 0.05% NaN3, 99.8% D2O, 0.747 mM TSP-d4) was added to the samples. The samples were shaken at 800 rpm and 25 °C for 45 min and transferred to 5 mm SampleJet rack tubes. Acquisition was performed as IVDr 32-scan 1D 1H NOESY. In total, 30,468 metabolomic intensities (MIs) were recorded from one-dimensional (1D) 1H nuclear magnetic resonance (NMR) spectroscopy. The signal intensities were integrated over small chemical shift intervals along the spectra, expressed in parts per million, ranging from 0.70 to 9.00 ppm. For a full description of the procedure used to prepare the NMR samples and obtain MIs, see Guo et al. . The NIR data were obtained from whole grain after malting. A total of 141 NIR wavelengths were recorded for absorbance from 950 to 1650 nm with a step of 5 nm. The resulting wavelengths were treated according to Rincent et al. as follows: i) the NIR wavelengths were normalized (centered to zero and scaled to variance one), and ii) the first derivative was computed using a Savitzky-Golay filter (Savitzky and Golay ) implemented in the R package signal (Signal Developers 2014). The Savitzky-Golay first derivative is a preprocessing step used to reduce random noise produced by instrumental fluctuations and environmental interference and for feature enhancement, preserving the important spectral features and amplifying changes in absorbance. See Supplementary Material 1 for an illustration of the NIR wavelengths before and after the Savitzky-Golay transformation. In the end, all 2,250 plots were characterized using MIs and NIR wavelengths.
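A minimal sketch of the NIR preprocessing described above (per-wavelength normalization followed by a Savitzky-Golay first derivative) is given below. The study used the R package signal; the Python version here is only illustrative, and the window length and polynomial order are not reported in the text, so they are assumed values.

```python
import numpy as np
from scipy.signal import savgol_filter

def preprocess_nir(spectra, window_length=11, polyorder=2):
    """
    spectra: (n_plots, n_wavelengths) absorbance matrix (141 wavelengths, 950-1650 nm).
    Step i: normalize each wavelength to zero mean and unit variance across plots.
    Step ii: Savitzky-Golay first derivative along the wavelength axis (5 nm step).
    Window length and polynomial order are assumptions, not values from the study.
    """
    normalized = (spectra - spectra.mean(axis=0)) / spectra.std(axis=0)
    return savgol_filter(normalized, window_length=window_length,
                         polyorder=polyorder, deriv=1, delta=5.0, axis=1)
```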
Estimation of heritability of MIs and NIR wavelengths Two univariate models were utilized to estimate variance components (VCs) and narrow-sense heritabilities ($h^2$) for MIs (Model-MI, hereafter Model 1) and NIR wavelengths (Model-NIR, hereafter Model 2). Model 1 was defined as: $$y_j = Xb_j + Z_g g_j + Z_l l_j + Z_{i_g} i_{gj} + Z_{i_l} i_{lj} + Z_s s_j + e_j \quad (1)$$ where $y_j$ is the vector of phenotypes for MIs ($j = 1, \ldots, 30{,}468$); $X$ is the design matrix for the fixed effects; $b_j$ is the vector of fixed effects (year-location-trial); $g_j$ is the vector of genomic breeding values of the lines with $g_j \sim N(0, G\sigma^2_{g_j})$, where $\sigma^2_{g_j}$ is the additive genomic variance and $G$ is the genomic relationship matrix (VanRaden ): $G = \frac{QQ'}{2\sum_i p_i(1 - p_i)}$, with $Q$ being the genotypic matrix centered by two times the observed allele frequencies of the reference alleles, and $p_i$ the allele frequency of the $i$th SNP; $l_j$ is the vector of genetic line effects, which includes non-additive genetic effects, such as epistasis, and additive effects not explained by marker genotypes, with $l_j \sim N(0, I\sigma^2_{l_j})$, where $\sigma^2_{l_j}$ is the variance of line effects; $i_{gj}$ and $i_{lj}$ are vectors of genotype-by-environment interactions, with the environment defined as the year-location combination, where $i_{gj} \sim N(0, \mathrm{diag}(G, \ldots, G)\,\sigma^2_{i_{gj}})$, i.e., a block-diagonal covariance structure with one $G$ block per environment, and $i_{lj} \sim N(0, I\sigma^2_{i_{lj}})$, with $\sigma^2_{i_{gj}}$ being the additive genomic-by-environment interaction variance and $\sigma^2_{i_{lj}}$ being the genetic line-by-environment interaction variance due to additive genetic effects not accounted for by marker genotypes and non-additive genetic effects; $s_j$ is the vector of spatial effects, with $s_j \sim N(0, S\sigma^2_{s_j})$, where $S$ is a spatial similarity matrix computed as $S = \frac{WW'}{tr(WW')/n}$, where $W$ is an indicator matrix relating the position of the target plot and its eight surrounding plots (neighboring plots) with the observations, $tr$ is the trace (sum of diagonal elements), $n$ is the total number of rows, and $\sigma^2_{s_j}$ is the variance of the $s_j$ effect (see Raffo et al. and Tessema et al. for a detailed description of this spatial effect); note that the degree of similarity in $S$ depends on the distance between plots; $Z_g$, $Z_l$, $Z_{i_g}$, $Z_{i_l}$, and $Z_s$ are the design matrices for $g_j$, $l_j$, $i_{gj}$, $i_{lj}$, and $s_j$, respectively; $e_j$ is a vector of random residual effects with $e_j \sim N(0, I\sigma^2_{e_j})$, where $\sigma^2_{e_j}$ is the residual variance. Model 2 was defined for NIR wavelengths ($j = 1, \ldots, 141$) with the same effects as in Model 1 plus an additional random effect $Z_m m_j$, corresponding to the batch in which the samples were malted, where $m_j$ is the vector of malting batch effects with $m_j \sim N(0, I\sigma^2_{m_j})$, and $\sigma^2_{m_j}$ is the variance of the malting batch effects; $Z_m$ is the design matrix for $m_j$. Note that Models 1 and 2 are adapted versions of GBLUP models (Habier et al. ; VanRaden ). VCs were estimated by Restricted Maximum Likelihood (REML) using the Average Information (AI-REML) module in DMU (Madsen and Jensen, 2013). The heritability at the level of single-plot measurement was estimated for each MI and NIR wavelength as $\hat{h}^2_j = d(G)\hat{\sigma}^2_{g_j}/\hat{\sigma}^2_{P_j}$, where $d(G)$ is the average of the diagonal elements of the genomic relationship matrix ($d(G) = 1.83$), $\hat{\sigma}^2_{g_j}$ is the estimated additive genomic variance, and $\hat{\sigma}^2_{P_j}$ is the estimated phenotypic variance for each MI or NIR wavelength. The $\hat{\sigma}^2_{P_j}$ for Model 1 was estimated as $\hat{\sigma}^2_{P_j} = d(G)\hat{\sigma}^2_{g_j} + \hat{\sigma}^2_{l_j} + d(G)\hat{\sigma}^2_{i_{gj}} + \hat{\sigma}^2_{i_{lj}} + (d(S) - mean(S))\hat{\sigma}^2_{s_j} + \hat{\sigma}^2_{e_j}$, where $\hat{\sigma}^2_{l_j}$, $\hat{\sigma}^2_{i_{gj}}$, $\hat{\sigma}^2_{i_{lj}}$, $\hat{\sigma}^2_{s_j}$, and $\hat{\sigma}^2_{e_j}$ are the estimated parameters, $d(S)$ is 1, and $mean(S)$ is 0.004. The $\hat{\sigma}^2_{P_j}$ for Model 2 was estimated using the same formula as for Model 1 plus the variance of the malting batch effects $\hat{\sigma}^2_{m_j}$. In addition, a significance test for $\hat{h}^2_j$ equal to zero was performed using a simulation procedure to determine the null distribution of $\hat{h}^2_j$ in our specific statistical design. This was done following the procedure proposed by Guo et al. ; see the description in their article for further details.
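The relationship matrices and the plot-level heritability ratio defined above can be illustrated with a short sketch. Note that the actual variance components were estimated with AI-REML in DMU; the code below only shows how $G$, $S$, and $\hat{h}^2_j$ would be assembled from marker data, a neighbour indicator matrix, and already-estimated variances.

```python
import numpy as np

def vanraden_G(genotypes):
    """
    genotypes: (n_lines, n_snps) matrix coded 0/1/2 (missing values already imputed to mean dosage).
    G = QQ' / (2 * sum(p_i * (1 - p_i))), with Q centered by twice the observed allele frequencies.
    """
    p = genotypes.mean(axis=0) / 2.0
    Q = genotypes - 2.0 * p
    return Q @ Q.T / (2.0 * np.sum(p * (1.0 - p)))

def spatial_S(W):
    """
    W: indicator matrix (one row per observation) flagging the eight plots surrounding
    the observation's plot. S = WW' / (tr(WW') / n), with n the number of rows, so the
    diagonal of S averages 1 by construction.
    """
    WWt = W @ W.T
    return WWt / (np.trace(WWt) / W.shape[0])

def plot_level_h2(var_g, var_p, G):
    """h2 = d(G) * sigma2_g / sigma2_P, with d(G) the mean diagonal of G (1.83 in this data set)."""
    return np.mean(np.diag(G)) * var_g / var_p
```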
Genomic and multi-omics models for yield and malting traits GBLUP The GBLUP model (Habier et al. ; VanRaden ) was utilized for the traits GY, PC, EY, WV, WC, FS, and BG. For analyses of GY and PC, GBLUP was defined with the same model effects as described in the previous subsection for Model 1. For analyses of the MQ traits EY, WV, WC, FS, and BG, GBLUP was defined as previously described for Model 2. GOBLUP-MI The GOBLUP model was proposed by Christensen et al. to integrate different omics data into genetic evaluations. In brief, GOBLUP is a joint model assuming that the phenotype of interest is affected by different omics expression levels (in addition to genomic effects), and where GEBVs can be obtained from a combination of genomic and omics information. See Christensen et al. for a complete theoretical description of the model and the derivation of GEBVs. In our study, GOBLUP-MI refers to the model including MIs and GOBLUP-NIR to the model including NIR wavelengths. The model of Christensen et al. (GOBLUP-MI) for GY and PC was implemented using the following two steps: step 1: $$y = Xb_1 + u + Z_g g_1 + Z_l l_1 + Z_{i_g} i_{g1} + Z_{i_l} i_{l1} + Z_s s_1 + e_1 \quad (2)$$ step 2: $$\hat{u} = Xb_2 + Z_g g_2 + Z_l l_2 + Z_{i_g} i_{g2} + Z_{i_l} i_{l2} + Z_s s_2 + e_2 \quad (3)$$ where $y$ is the vector of phenotypes; $X$, $b$, $Z_g$, $Z_l$, $Z_{i_g}$, $Z_{i_l}$, $Z_s$, $g$, $l$, $i_g$, $i_l$, $s$, and $e$ are defined as for GBLUP, regardless of whether they are denoted with a subscript 1 or 2; $u$ is the vector of MI effects on the phenotype with $u \sim N(0, Q_{MI}\sigma^2_u)$, where $Q_{MI}$ is the metabolomic similarity matrix computed as $Q_{MI} = \frac{M_{MI}M_{MI}'}{q}$, with $M_{MI}$ a 2,250 (number of observations) × 30,468 (number of MIs) matrix of centered and scaled MIs, and $\sigma^2_u$ the metabolomic variance. In step 2 (Eq. 3), $\hat{u}$ is the vector of predicted metabolomic effects from step 1 (Eq. 2). The vector of GEBVs in GOBLUP-MI is therefore computed as the vector of GEBVs from step 1 ($\hat{g}_1$) plus the vector of GEBVs from step 2 ($\hat{g}_2$). The GOBLUP-MI for the MQ traits (EY, WV, WC, FS, and BG) was defined with the same effects as for GY and PC plus an additional random effect $Z_m m_1$ (for step 1) and $Z_m m_2$ (for step 2), corresponding to the malting batch group in which the samples were malted, where $Z_m$ and $m$ were defined as for GBLUP regardless of subscript 1 or 2.
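A minimal sketch of the omics-specific parts of GOBLUP-MI is given below: building the metabolomic similarity matrix $Q_{MI}$ from centered and scaled MIs and combining the two sets of genomic solutions into GEBVs. The divisor $q$ is assumed here to be the number of MI features, and the mixed-model equations themselves (fitted with DMU in the study) are not shown.

```python
import numpy as np

def metabolomic_similarity(M_raw):
    """
    M_raw: (n_obs, n_MIs) matrix of metabolomic intensities.
    Columns are centered and scaled, then Q_MI = M M' / q; q is taken as the number of
    MI features (the divisor is not spelled out in the text, so this is an assumption).
    """
    M = (M_raw - M_raw.mean(axis=0)) / M_raw.std(axis=0)
    q = M.shape[1]
    return M @ M.T / q

def goblup_gebv(g_hat_step1, g_hat_step2):
    """GOBLUP breeding values: direct genomic effects plus omics-mediated genomic effects."""
    return g_hat_step1 + g_hat_step2
```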
GOBLUP-NIR

The GOBLUP-NIR was developed to include NIR wavelengths instead of MIs. This model had the same effects as GOBLUP-MI for all traits, except that the metabolomic effect ($\mathbf{u}$) was replaced by a new effect for the NIR wavelengths. For step 1 of GOBLUP-NIR, the NIR effect was defined as $\mathbf{v}$ (equivalent to $\mathbf{u}$ in GOBLUP-MI), with $\mathbf{v}$ the vector of NIR wavelength effects on the phenotype, where $\mathbf{v} \sim N(0, \mathbf{Q}_{NIR}\sigma^2_v)$. To build the NIR relationship matrix $\mathbf{Q}_{NIR}$, a principal component analysis (PCA) was performed over the 141 centered and scaled NIR wavelengths for the complete population. The first eight principal components explained more than 99% of the variation and were utilized to compute $\mathbf{Q}_{NIR} = \mathbf{M}_{NIR}\mathbf{M}_{NIR}^{\prime}/t$, where $\mathbf{M}_{NIR}$ is a 2,250 (number of observations) × 8 (number of selected principal components) matrix, $t$ is the number of selected principal components, and $\sigma^2_v$ is the estimated NIR wavelength variance. The principal components were used because they improved convergence of the REML algorithm used for VC estimation compared to using the NIR wavelengths directly. For step 2 of GOBLUP-NIR, the estimated NIR wavelength effects were denoted $\hat{\mathbf{v}}$ (equivalent to $\hat{\mathbf{u}}$ in GOBLUP-MI). The GOBLUP-NIR was used for all traits except PC, as PC is directly predicted from NIRS. Note that both models, GOBLUP-MI and GOBLUP-NIR, assume independence and equal heritabilities of all omics features.
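The construction of Q_NIR from the leading principal components can be sketched in the same spirit. In the illustration below the spectra are simulated, the PCA is done via SVD, and the number of retained components is whatever is needed to pass 99% of the explained variance (eight components in the study); this is a hedged illustration, not the study's code.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical NIR matrix: n observations x 141 wavelengths (simulated here).
n, w = 100, 141
X = rng.normal(size=(n, w))
X = (X - X.mean(axis=0)) / X.std(axis=0)   # centre and scale each wavelength

# PCA via SVD of the centred and scaled matrix.
U, s, Vt = np.linalg.svd(X, full_matrices=False)
explained = (s ** 2) / np.sum(s ** 2)
# Number of components needed to explain > 99% of the variation
# (the study retained the first eight components).
t = int(np.searchsorted(np.cumsum(explained), 0.99) + 1)

# Scores on the first t principal components (n x t), analogous to M_NIR.
M_NIR = U[:, :t] * s[:t]

# NIR similarity matrix Q_NIR = M_NIR M_NIR' / t.
Q_NIR = M_NIR @ M_NIR.T / t
print(f"components kept: {t}; Q_NIR shape: {Q_NIR.shape}")
```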
Variance and heritability estimation for GBLUP and GOBLUP for yield and malting quality traits

The VC estimation was performed using the AI-REML module in the DMU software (Madsen and Jensen, 2013). For the GBLUP models, $\hat{h}^2$ was computed using the same formulas as described for Models 1 and 2 in the section "Estimation of heritability of MIs and NIR wavelengths"; note that all these models have similar effects but are used for different phenotypes (i.e., MIs, NIR wavelengths, GY, PC, or MQ traits). The GOBLUP-MI allowed us to obtain different heritabilities than GBLUP. According to Christensen et al. , the heritability in the GOBLUP can be defined as $h^2 = h_d^2 + c_m^2 \times h_M^2$, where $h_d^2$ is the direct heritability obtained from step 1 of GOBLUP. For GOBLUP-MI, $h_d^2$ is estimated as $\hat{h}_d^2 = d(\mathbf{G})\,\hat{\sigma}^2_{g_1} / \hat{\sigma}^2_{P_1}$, with $d(\mathbf{G})$ and $\hat{\sigma}^2_{g_1}$ as previously defined and
$$\hat{\sigma}^2_{P_1} = d(\mathbf{G})\,\hat{\sigma}^2_{g_1} + d(\mathbf{Q}_{MI})\,\hat{\sigma}^2_{u} + \hat{\sigma}^2_{l_1} + d(\mathbf{G})\,\hat{\sigma}^2_{i_{g1}} + \hat{\sigma}^2_{i_{l1}} + \left(d(\mathbf{S}) - \mathrm{mean}(\mathbf{S})\right)\hat{\sigma}^2_{s_1} + \hat{\sigma}^2_{e_1};$$
$c_m^2$ is the metabolomic variance ratio, estimated as $\hat{c}_m^2 = d(\mathbf{Q}_{MI})\,\hat{\sigma}^2_{u} / \hat{\sigma}^2_{P_1}$; and $h_M^2$ is the heritability of the MIs, estimated as $\hat{h}_M^2 = d(\mathbf{G})\,\hat{\sigma}^2_{g_2} / \hat{\sigma}^2_{P_2}$, with
$$\hat{\sigma}^2_{P_2} = d(\mathbf{G})\,\hat{\sigma}^2_{g_2} + \hat{\sigma}^2_{l_2} + d(\mathbf{G})\,\hat{\sigma}^2_{i_{g2}} + \hat{\sigma}^2_{i_{l2}} + \left(d(\mathbf{S}) - \mathrm{mean}(\mathbf{S})\right)\hat{\sigma}^2_{s_2} + \hat{\sigma}^2_{e_2}.$$
For GOBLUP-NIR, the parameters were calculated using the same formulas as for GOBLUP-MI, but replacing $\mathbf{Q}_{MI}$ with $\mathbf{Q}_{NIR}$ when computing $\hat{\sigma}^2_{P_1}$ and $\hat{c}_m^2$ (the NIR wavelength variance ratio in GOBLUP-NIR); in GOBLUP-NIR, $h_M^2$ represents the heritability of the NIR wavelengths.
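A small numeric illustration of the heritability decomposition $h^2 = h_d^2 + c_m^2 \times h_M^2$ described above is given below. Every variance component in the sketch is a hypothetical value; $d(\mathbf{G}) = 1.83$ is the reported average diagonal of G, and $d(\mathbf{Q}_{MI})$ is assumed to be 1 purely for simplicity.

```python
# Illustrative decomposition of heritability in GOBLUP (all variance components hypothetical).
d_G, d_Q, d_S, mean_S = 1.83, 1.0, 1.0, 0.004   # d(G) as reported; d(Q_MI) assumed to be 1.0

# Hypothetical step-1 variance components (model for the phenotype y).
var_g1, var_u, var_l1, var_ig1, var_il1, var_s1, var_e1 = 0.15, 0.10, 0.05, 0.04, 0.03, 0.02, 0.60
var_P1 = (d_G * var_g1 + d_Q * var_u + var_l1 + d_G * var_ig1 + var_il1
          + (d_S - mean_S) * var_s1 + var_e1)

# Hypothetical step-2 variance components (model for the predicted omics effects u_hat).
var_g2, var_l2, var_ig2, var_il2, var_s2, var_e2 = 0.06, 0.02, 0.01, 0.01, 0.01, 0.30
var_P2 = (d_G * var_g2 + var_l2 + d_G * var_ig2 + var_il2
          + (d_S - mean_S) * var_s2 + var_e2)

h2_direct = d_G * var_g1 / var_P1   # direct heritability, h_d^2
c2_m      = d_Q * var_u / var_P1    # omics variance ratio, c_m^2
h2_M      = d_G * var_g2 / var_P2   # heritability of the omics effects, h_M^2
h2_total  = h2_direct + c2_m * h2_M
print(f"h2_d={h2_direct:.3f}, c2_m={c2_m:.3f}, h2_M={h2_M:.3f}, total h2={h2_total:.3f}")
```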
Cross-validation and model validation

Predictions of GEBVs from GBLUP, GOBLUP-MI, and GOBLUP-NIR were assessed using fivefold and leave-one-breeding-cycle-out (LBCO) CV schemes. The fivefold CV consisted of randomly masking the phenotypes of all replicates of 20% of the lines and using the remaining lines to predict the additive genetic values. This process was repeated five times until all lines had been assigned to one of the five folds and predicted. The fivefold CV is useful for predictions with a high genetic relationship between the reference population (RP) and the validation population (VP), as relatives such as parents, siblings, and half-siblings can be shared between RP and VP. Thus, the fivefold CV allows us to investigate the performance of the genetic models in a scenario where a new variety is predicted from its genotype but no phenotypic records are available. The LBCO CV was performed by masking the phenotypes of one breeding cycle in the VP and using the remaining phenotyped lines to predict the masked lines. This process was repeated twice to predict the breeding cycles evaluated in 2021 and 2022. The LBCO CV allowed us to evaluate the prediction problem where newly developed lines are predicted from parental and historical records. For GOBLUP-MI and GOBLUP-NIR, both CV schemes were performed by masking both the phenotypes and the MI (or NIR) information in the VP. The reason for masking the MIs (or NIR wavelengths) in the VP is that environmental correlations may influence the predictions if they are not masked, which would represent a scenario similar to a bivariate model with records for the secondary trait in the VP (see Guo et al. for details).

The models' predictive ability (PA) was evaluated as the correlation between GEBVs and phenotypes corrected for fixed effects ($\mathbf{y}_c$). In addition, the regression coefficient of GEBVs obtained with whole phenotypic information on GEBVs obtained with partial phenotypic information was used as an estimate of variance inflation: $b_{w,p} = \mathrm{cov}(\hat{\mathbf{g}}_w, \hat{\mathbf{g}}_p)/\mathrm{var}(\hat{\mathbf{g}}_p)$ (Legarra and Reverter ). The standard errors for PA and $b_{w,p}$ were obtained using ordinary non-parametric bootstrapping with replacement, full sample size, and 10,000 replications. The PA between models for each trait was contrasted using a two-tailed paired t-test (critical P-value = 0.01).

Ratios of accuracies

Different scenarios comparing ratios of population accuracies of predicted breeding values were assessed according to Legarra and Reverter . The ratios of accuracies measure the increase in accuracy obtained when additional information is included in the models and can be estimated as the correlation between breeding values estimated with whole ($\hat{\mathbf{g}}_w$) and partial ($\hat{\mathbf{g}}_p$) information; the lower the correlation, the larger the increase in accuracy from adding new information. Different ratios of accuracies were computed and named according to the information available in the VP (Fig. ). The ratios of accuracies computed were: (i) GBLUPg/gp, the correlation between GEBVs with genomic information in the VP and GEBVs with genomic and phenotypic information in the VP; (ii) GOBLUPg/gm, the correlation between GEBVs with genomic information in the VP and GEBVs with genomic and omics information in the VP; (iii) GOBLUPgm/gmp, the correlation between GEBVs with genomic and omics information in the VP and GEBVs with genomic, omics, and phenotypic information in the VP; and (iv) GOBLUPg/gmp, the correlation between GEBVs with genomic information in the VP and GEBVs with genomic, omics, and phenotypic information in the VP.
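The validation statistics described above (predictive ability, the LR regression coefficient $b_{w,p}$, and the LR ratio of accuracies) reduce to simple correlations and a regression coefficient. The sketch below computes them on simulated GEBV vectors purely for illustration; the vectors and their relationships are hypothetical and do not reflect the study's data.

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulated stand-ins for the quantities used in validation.
n = 200
y_corrected = rng.normal(size=n)                                   # phenotypes corrected for fixed effects
gebv_partial = 0.5 * y_corrected + rng.normal(scale=0.8, size=n)   # GEBVs with phenotypes masked in the VP
gebv_whole   = gebv_partial + rng.normal(scale=0.2, size=n)        # GEBVs with all phenotypes included

# Predictive ability: correlation between GEBVs (partial data) and corrected phenotypes.
pa = np.corrcoef(gebv_partial, y_corrected)[0, 1]

# LR regression coefficient b_{w,p} = cov(g_hat_w, g_hat_p) / var(g_hat_p);
# values close to 1 indicate neither inflation nor deflation of the GEBVs.
b_wp = np.cov(gebv_whole, gebv_partial)[0, 1] / np.var(gebv_partial, ddof=1)

# LR ratio of accuracies: correlation between whole- and partial-information GEBVs.
ratio = np.corrcoef(gebv_whole, gebv_partial)[0, 1]
print(f"PA={pa:.2f}, b_wp={b_wp:.2f}, ratio={ratio:.2f}")
```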
The plant material consisted of 639 sixth-generation (F6) spring barley (Hordeum vulgare L.) lines tested in 2,250 individual plots by the breeding company Nordic Seed A/S. The descriptive statistics for GY, PC, and the MQ traits WV, BG, EY, FS, and WC are presented in Table . The breeding lines came from two breeding cycles tested in the years 2021 and 2022 at three locations in Denmark (DK): Odder (Central DK), Holeby (South-East DK), and Skive (North-West DK). A breeding cycle is defined as all crosses made within a single calendar year. The breeding lines within each year-location combination were arranged in field trials (i.e., experimental blocks) following a randomized incomplete block design. The field trials were divided into plots of 8.25 m² (5.5 × 1.5 m), and in each plot a barley breeding line or a control line was sown. Two control barley lines were sown with three replications in each trial. Grain yield (kg/8.25 m²) and protein content, estimated by NIR spectra on raw grain using a PerkinElmer DA 7440 On-line NIR instrument, were recorded on each plot. Grain samples from each plot were collected and processed in micro-malting batches to obtain several malting quality traits: malt extract yield, wort viscosity (mPa·s), wort color (European Brewery Convention units), filtering speed (cm/20 min), and β-glucan (mg/L). A detailed description of the methodology used to obtain the malting quality traits can be found in Sarup et al. .

The DNA extraction was performed using a modified CTAB method (Rogers and Bendich, 1985). The plant material was genotyped using an Illumina iSelect 9K barley chip, and a total of 8,198 single-nucleotide polymorphism (SNP) markers were utilized. Quality control was done by removing SNPs with a minor allele frequency (MAF) lower than 5% or a call rate lower than 0.90. Genotypes were coded 0, 1, 2, counting the number of copies of the reference allele at each locus. Missing genotypes (~0.3%) were assigned two times the observed allele frequency (i.e., the mean dosage).

For each plot, the metabolomic information was obtained from 10 cuts of green flag leaf tips randomly distributed in the yield plot just after flag leaf appearance. All samples from the same location, year, and trial were collected in Eppendorf tubes on the same date within 3 h. The tubes were stored on dry ice in the field and subsequently frozen at −20 °C. The tissue samples were freeze-dried and thereafter pulverized using a TissueLyser II (Qiagen®), after which 1.0 ml of 50% methanol was added to the tubes. The samples were incubated in a Thermo shaker (TS-DW, Biosan) at 50 °C for 10 min and cooled to room temperature. After 5 min at 4,000×g in a centrifuge (4-5C, Sigma), 0.70 ml of supernatant was transferred to 2 ml Eppendorf tubes and frozen at −20 °C until shipment to the NMR laboratory in one batch per year (3 months after harvest). The samples were shipped on dry ice and stored at −80 °C upon arrival at the Swedish NMR Centre at the University of Gothenburg, Sweden. For NMR analysis, the samples were dried in a CentriVap lyophilizer for 2 h at a setting of 20 °C. Then, 60 µl of methanol-d4 was added to each tube, and the samples were dried again for one hour. Next, 600 µl of PREC buffer was added to the samples (37.5 mM NaPi pD 6.95, 0.05% NaN3, 99.8% D2O, 0.747 mM TSP-d4). The samples were shaken at 800 rpm and 25 °C for 45 min and transferred to 5 mm SampleJet rack tubes. Acquisition was performed as IVDr 32-scan 1D 1H NOESY. In total, 30,468 metabolomic intensities (MIs) were recorded from one-dimensional (1D) 1H nuclear magnetic resonance (NMR) spectroscopy. The signal intensities were integrated over small chemical shift intervals along the spectra, expressed in parts per million and ranging from 0.70 to 9.00 ppm. For a full description of the procedure used to prepare the NMR samples and obtain the MIs, see Guo et al. .

The NIR data were obtained from whole grain after malting. A total of 141 NIR wavelengths were recorded for absorbance from 950 to 1650 nm in steps of 5 nm. The resulting wavelengths were treated according to Rincent et al. as follows: (i) the NIR wavelengths were normalized (centered to zero and scaled to unit variance), and (ii) the first derivative was computed using a Savitzky-Golay filter (Savitzky and Golay ) implemented in the R package signal (Signal Developers 2014). The Savitzky-Golay first derivative is a preprocessing step used to reduce random noise produced by instrumental fluctuations and environmental interference and to enhance features, preserving the important spectral features while amplifying changes in absorbance. See supplementary material 1 for an illustration of the NIR wavelengths before and after the Savitzky-Golay transformation. In the end, all 2,250 plots were characterized using MIs and NIR wavelengths.
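For readers who want to reproduce the NIR preprocessing outside R, a minimal Python sketch of the two steps described above (per-wavelength normalization followed by a Savitzky-Golay first derivative) is given below, using scipy rather than the R package signal used in the study. The window length and polynomial order are illustrative assumptions, as they are not reported here, and the spectra are simulated.

```python
import numpy as np
from scipy.signal import savgol_filter

rng = np.random.default_rng(4)

# Simulated NIR spectra: n plots x 141 wavelengths (950-1650 nm in 5 nm steps).
n, w = 10, 141
spectra = np.cumsum(rng.normal(size=(n, w)), axis=1)   # smooth-ish stand-in curves

# i) normalise each wavelength: centre to zero, scale to unit variance.
spectra_norm = (spectra - spectra.mean(axis=0)) / spectra.std(axis=0)

# ii) Savitzky-Golay first derivative along the wavelength axis.
#     window_length and polyorder are illustrative choices, not the study's settings.
deriv1 = savgol_filter(spectra_norm, window_length=11, polyorder=2,
                       deriv=1, axis=1)

print(deriv1.shape)
```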
Estimates of heritability of metabolomic intensities and NIR wavelengths

The estimates of $h^2$ for each MI and each NIR wavelength are shown in Figs. and , respectively. For the other VCs, the relative proportions of the VCs for each MI and NIR wavelength are shown in supplementary material 2.
The distribution of $\hat{h}^2$ for the MIs is displayed in the histogram in Fig. a. A wide range of $\hat{h}^2$ was observed, with 40.22% of the values lower than 0.01 and a maximum value of 0.93. The average $\hat{h}^2$ of the MIs was 0.08, with a median of 0.03 and a third quartile of 0.11. The $\hat{h}^2$ for each of the 30,468 MIs ordered by chemical shift is shown in Fig. b. Across the different regions of the chemical shift interval, $\hat{h}^2$ was close to zero for MIs at the beginning and end of the spectra, indicating that no biological signals were detected in these regions; 6.77% of the MIs had moderate $\hat{h}^2$ (from 0.2 to 0.5), and 2.91% of the MIs had $\hat{h}^2$ larger than 0.5. The significance test for the $\hat{h}^2$ of the MIs, based on the determination of the null distribution of $\hat{h}^2$, resulted in a significance level at $\hat{h}^2 = 0.0123$ for a significance threshold of 0.01 (horizontal red-dashed line in Fig. b). A total of 17,677 out of 30,468 MIs (58.02%) differed significantly from zero in this statistical test.

The distribution of $\hat{h}^2$ for the NIR wavelengths is displayed in the histogram in Fig. a. It shows that 1.42% of the $\hat{h}^2$ values were lower than 0.01, and a maximum value of 0.48 was observed. The average and median $\hat{h}^2$ of the NIR wavelengths were both 0.16, with a third quartile of 0.20. The $\hat{h}^2$ for each of the 141 NIR wavelengths ordered by absorbance is shown in Fig. b. Across the different regions of the NIR spectra, 25.53% of the wavelengths had moderate $\hat{h}^2$ (> 0.2). In comparison with the MIs, the NIR wavelengths had a lower proportion of very low or high $\hat{h}^2$, and most values were in the intermediate range.

Variance components and heritabilities using GBLUP and GOBLUP models for yield and malting quality traits

The GBLUP, GOBLUP-MI, and GOBLUP-NIR models were utilized to estimate VCs and population parameters for GY, PC (except for GOBLUP-NIR), and the MQ traits. The estimates of the genetic ($\hat{\sigma}^2_g$) and phenotypic ($\hat{\sigma}^2_P$) variances are shown in Table . The other VCs and the standard deviations of the estimates are shown in supplementary material 3. The VCs were successfully estimated with GBLUP and GOBLUP-MI for all traits. The GOBLUP-NIR was successfully used to estimate VCs for GY, WV, BG, and EY, but this was not possible for FS and WC because the estimated NIR variance ($\hat{\sigma}^2_v$) converged towards zero. The $\hat{\sigma}^2_g$ for GBLUP was higher than the direct genomic variance of GOBLUP-MI ($\hat{\sigma}^2_g$ in step 1) for GY and PC. This trend was not observed for the MQ traits, for which the $\hat{\sigma}^2_g$ for GBLUP and the direct genomic variance of GOBLUP-MI were generally similar. The direct genomic variance of GOBLUP-NIR was slightly higher than that of GBLUP and GOBLUP-MI for GY, and slightly lower for the MQ traits.
For each trait, the $\hat{\sigma}^2_P$ differed between models, with the largest differences observed for BG and EY, for which the $\hat{\sigma}^2_P$ for GOBLUP-NIR was considerably larger than for the other models. The $\hat{\sigma}^2_P$ for BG and EY in GOBLUP-NIR was higher than the raw phenotypic variance of the trait (Table ). The $\hat{\sigma}^2_P$ for GBLUP and GOBLUP-MI was generally in a similar range for GY, PC, WV, and WC, but it was higher for GBLUP for BG and FS, and higher for GOBLUP-MI for EY.

The estimates of the genetic parameters and heritabilities for the GBLUP, GOBLUP-MI, and GOBLUP-NIR models are shown in Table . The heritabilities varied depending on the model used; the highest values were obtained for WC (0.38–0.38), followed by WV (0.21–0.22), BG (0.18–0.33), EY (0.11–0.26), GY (0.11–0.13), and PC (0.08), and the lowest was obtained for FS (0.02). Note that the $\hat{h}^2$ estimated from the GOBLUP models is the sum of a direct heritability ($\hat{h}_d^2$) and an omics-mediated heritability equal to the product of $\hat{c}_m^2$ and $\hat{h}_M^2$ (decomposed in Table ). The proportion $\hat{c}_m^2$ for GOBLUP-MI was higher for GY (20.5%) and PC (25.5%) than for the MQ traits (< 4%). The proportion $\hat{c}_m^2$ for GOBLUP-NIR was in a similar range to GOBLUP-MI for GY (22.3%), but it was considerably higher for WV (81.2%), BG (63.2%), and EY (85.3%). The $\hat{h}_M^2$ for GOBLUP-MI was highest for WV (0.16), followed by FS (0.14), BG (0.14), GY (0.14), EY (0.11), and WC (0.04). The $\hat{h}_M^2$ for GOBLUP-NIR was highest for BG (0.44), followed by EY (0.28), WV (0.22), and GY (0.03). Comparing the two GOBLUP models, GOBLUP-MI presented a higher $\hat{h}^2$ for GY and WV, whereas GOBLUP-NIR presented a higher $\hat{h}^2$ for BG and EY, and both models presented similar or higher $\hat{h}^2$ than GBLUP across the different traits.

Prediction of breeding values

The PA and $b_{w,p}$ for the GBLUP, GOBLUP-MI, and GOBLUP-NIR models were investigated using fivefold (Table ) and LBCO (Table ) CV schemes. In the fivefold CV (Table ), the highest PA was observed for WC (0.70), followed by WV (0.37 to 0.48), GY (0.39 to 0.42), PC (0.37), BG (0.27 to 0.32), and EY (0.27 to 0.32), and the lowest for FS (0.23). As a general trend, the PA of GBLUP and GOBLUP-MI was similar and, for several traits, significantly higher than the PA of GOBLUP-NIR. For PC, the highest PA was obtained for GOBLUP-MI and was 1.4% higher than for GBLUP (not significant at a critical P-value of 0.01). For WV, the highest PA was obtained for GBLUP and GOBLUP-MI and was significantly higher (~30%) than for GOBLUP-NIR. A similar trend was observed for BG and EY, with significantly higher PA for GBLUP and GOBLUP-MI (~17%) compared to GOBLUP-NIR. For GY, however, the highest PA was obtained with the GOBLUP-NIR model, followed by the GBLUP and GOBLUP-MI models. Here, the increase in PA provided by GOBLUP-NIR was small, representing a 2.4% increase compared to GBLUP and 8.3% compared to GOBLUP-MI; the differences for GY were not significant in the t-test (critical P-value of 0.01).
No relevant differences in PA were observed between the models for BG, FS, and WC. In the fivefold CV, the $b_{w,p}$ did not signal any significant under- or over-dispersion, since the values were around 1 for all models (Table ).

In the LBCO CV (Table ), the PAs were lower than in the fivefold CV. The highest PA was observed for WC (0.60), followed by WV (0.33 to 0.43), BG (0.25 to 0.29), EY (0.26 to 0.27), PC (0.22), GY (0.10 to 0.13), and FS (0.10). As in the fivefold CV, there was a general trend of similar PA between GBLUP and GOBLUP-MI, and both models significantly outperformed GOBLUP-NIR for several traits. For GY, the highest PA was obtained with the GBLUP and GOBLUP-NIR models and was ~39% higher than for GOBLUP-MI; these differences were significant (P-value < 0.01) between GBLUP and GOBLUP-MI but not between GOBLUP-NIR and GOBLUP-MI. For PC, the highest PA was obtained for GOBLUP-MI and represented a non-significant increase of 4.6% compared to GBLUP. For WV, the highest PA was obtained with the GBLUP and GOBLUP-MI models and was ~33% higher than for GOBLUP-NIR; these differences were significant (P-value < 0.01) between GOBLUP-MI and GOBLUP-NIR but not between GBLUP and GOBLUP-NIR. A similar trend was observed for BG, with the highest PA for GBLUP and GOBLUP-MI and an increase of 14.6% compared to GOBLUP-NIR, although the differences between models were not significant. No relevant differences in PA were observed between the models for BG, FS, and WC. The estimates of $b_{w,p}$ in LBCO differed depending on the trait and model. No significant under- or over-dispersion was observed for most traits, but some over-dispersion was observed for GY.

The ratios of population accuracies of predicted breeding values obtained with the LR method for the fivefold and LBCO CVs are presented in Tables and , respectively. Ratios of accuracies close to one indicate that no relevant improvement is obtained by including new information in the prediction. We observed that the ratios for the fivefold CV (0.85 to 1.0) were higher than for the LBCO CV (0.61 to 1.0) for all traits. In the fivefold CV, a moderate improvement was observed from including phenotypic information in the VP for all models and traits (ratios from 0.87 to 0.95 for GBLUPg/gp and GOBLUPgm/gmp). A similar trend, but with a larger effect of including phenotypes in the VP, was observed in the LBCO CV (ratios from 0.62 to 0.86 for GBLUPg/gp and GOBLUPgm/gmp). The impact of including omics information in the VP can be observed from the GOBLUPg/gm ratio. From GOBLUPg/gm, it can be seen that there was no relevant effect of including MIs in the VP in either of the two CV schemes for GOBLUP-MI (GOBLUP-MIg/gm ratio ~1 for all traits). Similar results were observed for the GOBLUP-NIRg/gm ratio for GY in both CV schemes. The GOBLUP-NIRg/gm ratios for WV, BG, and EY were about 0.95 in the fivefold CV and 0.90 in the LBCO CV. In principle, this may suggest an improvement from using NIR wavelengths in GOBLUP-NIR, but considering the lower PA of GOBLUP-NIR for these traits, this improvement does not seem to represent an extra benefit of the NIR wavelengths for predictive performance compared to the baseline GBLUP or GOBLUP-MI models.
The present study used a commercial spring barley breeding population phenotyped for yield, grain protein content, and malting quality traits to investigate the following three research questions. First, we investigated the genetic variation and heritabilities of the MIs and NIR wavelengths, and found that a significant proportion of the MIs and NIR wavelengths presented medium to high additive genetic variance and $\hat{h}^2$.
Second, we assessed the performance of genetic models including genomic and metabolomic intensities (GOBLUP-MI) or genomic and NIR wavelengths (GOBLUP-NIR) to estimate VCs and heritabilities for all the available traits. We found that GOBLUP-MI and GOBLUP-NIR increase the proportion of genetic variance explained by the model for grain yield, grain protein content, malt extract yield, and β-glucan content. Third, we evaluated the performance of the developed models for predicting breeding values, and we generally observed similar accuracy for GBLUP and GOBLUP-MI, and lower accuracy for GOBLUP-NIR. Despite this general trend, GOBLUP-MI and GOBLUP-NIR enhanced predictive ability by 4.6% for grain protein content in the leave-one-breeding-cycle-out CV and by 2.4% for grain yield in the fivefold CV, respectively, compared to the baseline GBLUP model, although these differences between models were not statistically significant in a t-test (critical P-value of 0.01).

Heritability of metabolomic intensities and NIR wavelengths

The $h^2$ was investigated using a univariate analysis for each of the 30,468 MIs and 141 NIR wavelengths. We observed that the $\hat{h}^2$ of the MIs varied from values < 0.01 to 0.93, and we identified that the heritabilities of 17,677 MIs (58.02%) were statistically significantly different from zero (Fig. ). Guo et al. analyzed a similar barley dataset from the same breeding company, but with MIs obtained from wort (instead of leaf tissue, as in our study). Both studies found a distribution with a high proportion of low $\hat{h}^2$ values (< 0.1), but in our case we found a higher proportion of moderate $\hat{h}^2$ (values from 0.2 to 0.5) and high $\hat{h}^2$ (> 0.5). Guo et al. found that 35.82% of 24,018 MIs were significantly different from zero. Although both studies defined the significance threshold at 0.01 and found a relevant proportion of significant MIs, the percentage reported by Guo et al. was lower than in our case. Differences in statistical power to detect significant $\hat{h}^2$ may influence the results, as greater statistical power is expected in our study due to the larger sample size. Also, Guo et al. included three years of information in the analysis instead of two, which could lead to higher environmental variance and decrease the estimates of $\hat{h}^2$. The differences between the studies might also be explained by the different tissues used to obtain the MIs. The heritability of MIs has also been investigated in other species using different tissues, such as fruits and leaves of coffee (Gamboa-Becerra et al. ), milk and blood in Holstein cattle (Wittenburg et al. ; Aliakbari et al. ), and plasma and serum in humans (Frahnow et al. ; Hagenbeek et al. ), and variable $\hat{h}^2$ for MIs (ranging from 0 to > 0.5) has been found across these tissues and species. According to our significance threshold estimated at 0.0123, 41.98% of the $\hat{h}^2$ of the MIs were not significantly different from zero. The large proportion of non-significant $\hat{h}^2$ is somewhat expected, since we used all the available NMR spectra, in which some regions may include MIs with low or no biological signal (as similarly observed by Aliakbari et al. and Guo et al. ).
The $\hat{h}^2$ of NIR wavelengths in our study ranged from < 0.01 to 0.48 (Fig. ). Rincent et al. investigated the broad-sense heritability of NIR for leaf and grain tissue of wheat and wood tissue of poplar. These authors reported that the broad-sense heritability was highly variable along the spectrum, with peaks above 60% for both tissues. Similar values of broad-sense heritability of NIR wavelengths from grain tissue in wheat were found by Robert et al. . The articles of Rincent et al. and Robert et al. estimated broad-sense heritability instead of narrow-sense heritability. Our models also allow us to estimate broad-sense heritability by using the total estimated genetic variance (i.e., $\mathrm{diag}(\mathbf{G})\hat{\sigma}^{2}_{g_{j}}+\hat{\sigma}^{2}_{l_{j}}$), and the highest peaks of broad-sense heritability were between 40 and 60%. A possible explanation for the higher values obtained by Rincent et al. and Robert et al. may be the different species involved in the studies. Our results on $\hat{h}^2$ are novel, as this is the first report on the $\hat{h}^2$ of MIs from leaf tissue in barley and of NIR wavelengths from whole grain after malting.
Population parameters estimated with genomic and multi-omics models for yield and malting quality traits
Population parameters for GY, PC and MQ traits were estimated with the GBLUP and GOBLUP-MI models, and for GY and MQ traits with GOBLUP-NIR. The estimated phenotypic variance ($\hat{\sigma}^{2}_{P}$) was larger for the GOBLUP models than for GBLUP. This occurs because the MIs (GOBLUP-MI) and NIR wavelengths (GOBLUP-NIR) capture environmental variance from year-location-trial, which in GBLUP models would be captured by the fixed effects. For GOBLUP-NIR, we observed that the $\hat{\sigma}^{2}_{P}$ for WV, BG, and EY was higher than the variances of the traits. This issue was also observed, and was even more extreme, in Guo et al. for MQ traits using a GOBLUP model with MIs from wort; possible reasons were thoroughly discussed by these authors and were related to possibly wrong model assumptions. According to Christensen et al. and Guo et al. , possible model deficiencies could be related to: i) wrongly assuming independence and constant heritabilities of omics effects; note that this assumption is needed when using the joint model for prediction of breeding values, but we observed different $\hat{h}^2$ for MIs and NIR wavelengths across the spectra; and ii) the assumption of additivity of omics features, which implies that the similarity matrices $\mathbf{Q}_{MI}$ and $\mathbf{Q}_{NIR}$ are matrix cross products of the MIs and of the selected principal components of NIR, respectively.
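A minimal sketch of how such an omics similarity matrix can be built from a lines-by-features matrix is given below; the centering and scaling choices are illustrative assumptions rather than the exact construction used in the study, and the same code applies to $\mathbf{Q}_{NIR}$ when the input columns are the selected NIR principal-component scores.

import numpy as np

def omics_similarity(M):
    """Build an omics similarity matrix Q from a lines-by-features matrix M
    (e.g., metabolomic intensities or NIR principal-component scores).
    Q is the scaled cross product M M', analogous to a genomic relationship
    matrix built from marker covariates."""
    M = np.asarray(M, dtype=float)
    M = M - M.mean(axis=0)                # center each feature
    sd = M.std(axis=0, ddof=1)
    M = M / np.where(sd > 0, sd, 1.0)     # scale, guarding against constant features
    return (M @ M.T) / M.shape[1]         # n_lines x n_lines similarity matrix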
Comparing GBLUP and GOBLUP-MI, a lower direct genomic variance was observed for GOBLUP-MI for GY and PC. The lower direct genomic variance can be explained by part of the genetic variance being captured by the MIs (the $\mathbf{u}$ effect in step 1). A similar and even more extreme trend has recently been observed for GOBLUP for MQ traits in barley (Guo et al. ). The partition of VCs in GOBLUP-MI revealed that a large proportion of the total variance was captured by MIs for GY and PC (measured by $\hat{c}^{2}_{m}$ > 0.20). This trend was not observed for MQ traits, where $\hat{c}^{2}_{m}$ was lower than 0.05 for all traits. Previous reports using GOBLUP with MIs have found higher values of $\hat{c}^{2}_{m}$ for MQ traits (Guo et al. , ). However, an important difference between these studies and ours is that they used MIs from wort, and therefore the MIs are more directly related to MQ traits. For GY and PC, a substantial metabolome-mediated heritability was obtained with GOBLUP-MI, resulting in a higher $\hat{h}^2$ than for GBLUP. The differences in $\hat{h}^2$ between GBLUP and GOBLUP-MI were smaller for MQ traits. Considering the metabolome-mediated heritability ($h^{2}_{M}$) estimated for the different traits, a greater potential for including MIs in prediction can be expected for GY and PC than for MQ traits. Additional VC analyses were performed with GOBLUP-MI that tested the performance of using only significant MIs or removing the first 4000 and last 1000 low-signal MIs, but no relevant differences in VCs were observed compared to using all MIs.
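In these comparisons, the two quantities can be read, in a form consistent with their verbal definitions in the text, as
$$\hat{c}^{2}_{m}=\frac{\hat{\sigma}^{2}_{m}}{\hat{\sigma}^{2}_{P}},\qquad \hat{h}^{2}_{M}=\frac{\hat{\sigma}^{2}_{g\rightarrow m}}{\hat{\sigma}^{2}_{P}},$$
where $\hat{\sigma}^{2}_{m}$ is the phenotypic variance captured by the omics features, $\hat{\sigma}^{2}_{g\rightarrow m}$ the part of that variance that is genetically determined (the metabolome-mediated genetic variance), and $\hat{\sigma}^{2}_{P}$ the total phenotypic variance; the exact expressions follow the Methods and Christensen et al. and may differ in detail.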
For GOBLUP-NIR, a large proportion of the total variance was captured by NIR wavelengths for GY, WV, BG, and EY, where $\hat{c}^{2}_{m}$ was higher than in GOBLUP-MI for the four traits. The $\hat{c}^{2}_{m}$ in GOBLUP-NIR was especially large for WV ($\hat{c}^{2}_{m}$ = 0.81), BG ($\hat{c}^{2}_{m}$ = 0.63), and EY ($\hat{c}^{2}_{m}$ = 0.85). The large $\hat{c}^{2}_{m}$ for these traits could be related to a high correlation between NIR wavelengths and grain composition, which is well established in the literature, and NIR wavelengths are commonly used to predict grain composition and quality in cereals (Dowell et al. ; Osborne ). In our study, the NIR wavelengths were optimized to predict grain protein content, but they may still be highly correlated with quality traits other than protein content. Similarly to GOBLUP-MI, the direct genomic variance in GOBLUP-NIR was reduced for WV, BG, EY, and FS, but unexpectedly, an opposite trend was observed for GY. A hypothesis for this opposite trend is that the NIR wavelengths may help to improve the partition of environmental, genetic, and genotype-by-environment interaction effects, resulting in more genetic variance being captured by the genomic effect for GY. This could also be true for WV, BG, EY, and FS, but it may be more relevant for traits with higher genotype-by-environment interaction, such as GY (see VCs in supplementary material 3). Conversely, this could also be related to an upward-biased estimate of direct genomic variance due to wrong model assumptions, as described above. Comparing the $\hat{h}^2$ obtained with GOBLUP-NIR and the other models, it was intermediate for GY, highest for BG and EY, and lowest for WV. GOBLUP-NIR failed to reach convergence of the REML algorithm for FS and WC due to the NIR variance converging towards zero. Several alternatives to obtain estimates for those traits were tried, such as using different starting values for the REML algorithm, using raw or normalized NIR wavelengths instead of the Savitzky-Golay transformation, and using NIR wavelengths of raw grain instead of whole grain after malting, but none of these attempts helped to obtain VCs for these traits. This could mean that the NIR variance for these traits is not significant and the NIR effect could be excluded from the model, which results in a GBLUP model. Note that principal components were used for the NIR wavelengths in GOBLUP-NIR; this strategy differs from the study by Rincent et al. , where all NIR wavelengths were used directly. We believe that no relevant differences in the results should be expected between using all NIR wavelengths or the principal components, as the latter explained more than 99% of the variation in the NIR wavelengths. However, using the principal components was convenient to facilitate convergence in our study.
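A minimal sketch of this kind of principal-component reduction (141 NIR wavelengths reduced to the components that jointly explain at least 99% of the variation) is shown below; the use of scikit-learn and the exact preprocessing are illustrative assumptions.

import numpy as np
from sklearn.decomposition import PCA

def nir_principal_components(nir, explained=0.99):
    """Reduce a lines-by-wavelengths NIR matrix to the leading principal
    components that jointly explain at least `explained` of the variance.
    PCA centers the columns internally before the decomposition."""
    nir = np.asarray(nir, dtype=float)
    pca = PCA(n_components=explained, svd_solver="full")
    scores = pca.fit_transform(nir)       # n_lines x n_components PC scores
    return scores, pca.explained_variance_ratio_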
Genomic and multi-omics prediction
The predictive ability (PA), the ratio of accuracies according to Legarra and Reverter , and the variance dispersion of GEBV ($b_{w,p}$) were evaluated in fivefold and LBCO CVs for the GOBLUP and GBLUP models. The fivefold CV allows us to investigate the predictions of the models in a favorable scenario, as close relatives such as parents, siblings, and half-siblings can be shared between RP and VP, increasing the genetic connections between RP and VP. The LBCO CV better reflects the practical conditions in a breeding program, where new lines must be predicted from historical information before their phenotypes are obtained. Thus, the genetic relationships between RP and VP in LBCO are much lower compared to fivefold, and lower PAs are expected (Shao ; Kohavi ). On the other hand, different hypotheses can be tested depending on the CV used. In fivefold CV, the accuracy for new lines included in a breeding cycle that were genotyped but not phenotyped can be tested; this investigation is particularly relevant when genotyping is less expensive than phenotyping. The LBCO, alternatively, allows us to test the accuracy of predicting future performance given that lines are genotyped after single seed descent in F4. GBLUP and GOBLUP-MI generally exhibited a similar PA for both CVs, performing better than GOBLUP-NIR for most traits. Despite that, there were some specific cases in each CV where the PA of GOBLUP-MI and GOBLUP-NIR was higher than the PA of GBLUP. In fivefold CV, GOBLUP-NIR revealed an increase of 2.4% for GY, and in LBCO CV, GOBLUP-MI revealed an increase of 4.6% for PC, although these differences were not statistically significant (critical P-value of 0.01). There were also some cases in which GBLUP outperformed the PA of GOBLUP-MI (GY in both CVs and EY in LBCO CV) and GOBLUP-NIR (WV, BG, and EY in both CVs), but the differences were only statistically significant between GBLUP and GOBLUP-NIR in fivefold CV for BG and EY. This trend of a lack of improvement in PA with GOBLUP-MI is consistent with results by Guo et al. . The reasons for the lower PA of the GOBLUP models could be related to deficiencies in the model assumptions, as discussed in the previous section. The ratios of population accuracies were analyzed for fivefold and LBCO CVs. We observed a moderate and a high improvement from including phenotypic information in VP in fivefold and LBCO CVs, respectively. This is expected, as a higher response to including new information is generally obtained in more restrictive scenarios. No relevant improvement from including MIs or NIR wavelengths was observed in any of the CVs and models; these results were consistent with those observed in the analysis of PA. In contrast, a substantial improvement from including MIs was observed using the ratios of accuracies by Guo et al. . Importantly, Guo et al. used MIs from wort instead of leaf tissue to assess MQ traits, which could be the main explanation for the differences observed. Comparing our study and Guo et al. , we observed that the sampling stage and tissue were relevant for MQ traits, with the highest benefit observed for MIs sampled in wort. Nevertheless, sampling MIs in the wort may not represent an optimal strategy, as it requires all the malting steps and thus incurs extra cost. Further studies exploring alternative sampling stages and tissues to define the best sampling strategy for obtaining either MIs or NIR wavelengths for the different traits are warranted. No variance inflation ($b_{w,p}$) was found for any of the traits in fivefold CV; however, moderate over-dispersion was found for GY in LBCO CV. A possible explanation may be that in LBCO CV each breeding cycle is assessed in a different year. This could result in an unbalanced scenario for predicting traits with large genotype-by-environment interactions, as lines assessed in one year are used to predict outcomes for a different breeding cycle tested in a different year, which could lead to issues of variance inflation (Raffo and Jensen ). Additional analyses were performed with GOBLUP-MI by testing the predictive performance of using only significant MIs or removing the first 4000 and last 1000 low-signal MIs, but a similar or lower performance was observed compared to using all MIs. Further studies are warranted to explore alternative sampling strategies for identifying the optimal stage and tissue and to investigate the impact of violating assumptions in GOBLUP models.
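As a minimal illustration of the two validation layouts compared in this section, the sketch below builds random fivefold partitions and leave-one-breeding-cycle-out partitions from a list of line identifiers; the inputs and function names are illustrative.

import numpy as np

def fivefold_splits(line_ids, k=5, seed=1):
    """Random k-fold partition of lines; close relatives may end up split
    between the reference and validation populations."""
    rng = np.random.default_rng(seed)
    ids = np.asarray(line_ids)
    idx = rng.permutation(len(ids))
    return [ids[idx[i::k]] for i in range(k)]          # k validation sets

def lbco_splits(line_ids, breeding_cycle):
    """Leave-one-breeding-cycle-out: each validation set is one whole breeding
    cycle, predicted from the remaining (historical) cycles."""
    ids = np.asarray(line_ids)
    cycles = np.asarray(breeding_cycle)
    return {c: ids[cycles == c] for c in np.unique(cycles)}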
In this study, we used a commercial barley breeding population to investigate the viability of including metabolomic intensities, sampled from early flag leaves, and near-infrared wavelengths, sampled from whole grain after malting, in genomic evaluations of yield and malting quality traits. First, we concluded that a significant proportion of metabolomic intensities and near-infrared wavelengths had medium to high additive genetic variance and heritabilities ($\hat{h}^2$) and can, therefore, be potentially useful for genetic evaluations. Second, we concluded that multi-omics models including genomics and metabolomics (GOBLUP-MI), or genomics and NIR wavelengths (GOBLUP-NIR), increased the proportion of genetic variance explained by the models for grain yield, grain protein content, malt extract yield, and β-glucan content, compared to a purely genomic model (GBLUP). Third, we assessed genomic and multi-omics models for the prediction of breeding values, and we concluded that GBLUP and GOBLUP-MI had a similar prediction accuracy and performed better than GOBLUP-NIR for most traits.
Despite that, the GOBLUP-MI and GOBLUP-NIR models slightly improved the accuracy of predicting breeding values compared to GBLUP for some specific traits, but the differences were not statistically significant in a t-test. The different performance of the GOBLUP models across traits might be related to aspects specific to each trait (e.g., genetic architecture, influence of genotype-by-environment interactions) and to the sampling strategy used to obtain the metabolomic intensities or near-infrared wavelengths. For malting quality traits, sampling leaf tissue revealed a worse performance for GOBLUP-MI compared to previous research that had sampled wort. The lack of advantage conferred in our case is therefore likely attributable to the sampling strategy and not to the method utilized.
Below are the links to the electronic supplementary material.
Supplementary file1 (PDF 1714 KB)
Supplementary file2 (PDF 320 KB)
Supplementary file3 (XLSX 43 KB)
Pharmacogenetic information in Swiss drug labels – a systematic analysis
83ef5bba-90ee-463f-80b7-4360a55e0e94
8292148
Pharmacology[mh]
“One size fits all” is the common strategy of dose-finding studies and, consequently, the standard of drug therapy. However, drug therapies may fail and/or may induce considerable adverse drug reactions (ADRs). The influence of patients’ genetic predispositions on drug response has been studied over decades and, therefore, pharmacogenetics (PGx) is gaining attention in patient-centered research and personalized medicine . For the translation of PGx information into clinical decisions, health care professionals (HCPs) have to consider drug-gene interactions (DGIs) in addition to drug-drug interactions (DDIs). Similar to DDIs, not all DGIs require an intervention. The level of actionability depends on both the genetic variant of an enzyme and the metabolized substrate. Almost 100% of the population carry at least one actionable genetic variant . Consequently, it is expected that implementation of PGx into clinical decisions might be a strategy to reduce the substantial burden of ADRs , which still represents a major concern in health care . Considering the high number of drug-relevant genes and the multitude of available substances on the market, genetic variability potentially affects a large number of patients. In this study, we focused on the drug label (DL), one of the first sources for HCPs to check for information on a drug. The Pharmacogenomics Knowledgebase (PharmGKB) is an expert-curated knowledgebase which collects and disseminates information on DGIs. The website ( https://www.pharmgkb.org ) is publicly available and supports researchers and clinicians in the interpretation of human genetic variation in relation to drug response. The information available includes prescribing information from clinical guidelines, curated pathways, pharmacogene summaries, annotations on associations between genetic variants and drug responses as reported in the literature, and DLs containing PGx information . The PharmGKB has defined four PGx levels (see Table ) to classify the PGx information mentioned in DLs according to the potential for action . Several groups have compared the information on PGx in DLs authorized by different agencies . In the United States, the Food and Drug Administration (FDA) approves the DLs and provides a table of pharmacogenomic biomarkers in DLs . In Switzerland, Swissmedic approves all Swiss DLs before they become publicly available ( www.swissmedicinfo.ch ). The Swiss DL is organized in different sections with defined headings; however, no section is dedicated to PGx. For the DLs of Switzerland, no overview or comparison of PGx information exists at this time. By analyzing the DLs, we obtain an overview of the current state of PGx information, helping us to identify inconsistencies and to suggest potential improvements for the future. Accordingly, the aim of this project was to provide a systematic analysis of the Swiss DL sections reporting PGx-relevant information on metabolizing enzymes and transporters as well as HLA risk alleles, to evaluate the instructions provided to HCPs on PGx information and, finally, to compare the respective PGx levels with those provided in DLs authorized by agencies of other countries.
Natural language processing (NLP)
We applied natural language processing (NLP). Terms used to search for PGx information within the DLs were gathered based on a preliminary analysis of DLs (in German), the literature , and the AmiKoWeb website ( https://amiko.oddb.org ). The selected search terms to identify specific genes were related to genetic polymorphisms (defined as genetic variants with a prevalence of more than 1% in a population ) known to be involved in drug metabolism. An expert group (CJ, KS, KH, HMzS) selected 25 eligible word stems corresponding to 245 different search terms for the NLP (for details, see Supplementary Fig. ). We used AmiKoWeb for the full-text search on 4th February 2019. All 4306 Swiss DLs available in German, describing the 15,367 products on the Swiss market (including different dosages and package sizes), were screened for PGx information by NLP. The search identified 5979 hit sentences (corresponding to 606 chemical substances and 1399 different brand drugs) (Fig. ). Supplementary Fig. gives an overview of the primary NLP search.
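A minimal sketch of the kind of full-text screening described above, expanding word stems into search terms and flagging label sentences that contain any of them, is shown below; the stems listed and the sentence splitting are illustrative and are not the 25 expert-selected stems or the AmiKoWeb implementation used in the study.

import re

# Illustrative stems only; the study used 25 expert-selected word stems
# expanded to 245 search terms (see the Supplementary material).
WORD_STEMS = ["cyp2d6", "cyp2c19", "hla-b", "g6pd", "metabolisierer"]

def find_hit_sentences(label_text, stems=WORD_STEMS):
    """Return the sentences of a drug-label text that contain any search stem."""
    sentences = re.split(r"(?<=[.!?])\s+", label_text)   # naive sentence splitting
    pattern = re.compile("|".join(re.escape(s) for s in stems), re.IGNORECASE)
    return [s for s in sentences if pattern.search(s)]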
Evaluation of the identified sentences for PGx-relevance
We examined all hit sentences for PGx-relevance. Any information related to a genetic polymorphism of an enzyme known to be involved in drug metabolism or drug transport (pharmacokinetics) and any information on HLA risk types was considered PGx-relevant. We consequently excluded genetic mutations (prevalence <1%), disease-related gene defects (e.g., genetic hypercholesterolemia), disease-related chromosomal abnormalities (e.g., Philadelphia chromosome), nonhuman genetic factors (e.g., genotype of viruses), genes encoding proteins used for treatment selection (e.g., in oncology), and biomarkers related to a drug other than the referenced drug (e.g., in the case of an interaction).
Classification of the PGx-relevant sentences
The classification of the sentences in the identified DLs was based on the PGx levels proposed by PharmGKB (Table ). If one sentence in a DL resulted in a higher PGx level compared to other sentences in the same DL, the highest annotated PGx level was considered in the analysis. After the first annotation of PGx levels, 10% of the sentences were reannotated to evaluate consistency and 5% were independently annotated by a second person to test reliability. After each step, the expert group was consulted. We translated the PGx-relevant sentences into English and submitted them to PharmGKB. Final discrepancies were resolved in collaboration with experts of the PharmGKB group (Fig. ). We checked the PGx-relevant sentences of the same chemical substances (indicated by the ATC code level 1) and, in the case of multiple brand products with the same text in the DL, we defined one reference DL (refDL); either we selected the original product (brand name) or we arbitrarily chose the first generic drug in the list. We refer to Supplementary Table for details on the refDLs. We analyzed the PGx-relevant sentences and the refDLs by the section where the PGx information was located, by the anatomic group (indicated by the ATC code level 1 of the corresponding substance), and by the biomarker mentioned in the concerned PGx-relevant sentence, respectively.
Annotations entered into the PharmGKB knowledgebase
PharmGKB applied their process of quality control to the translated DLs, annotated them, and entered them into the PharmGKB knowledgebase (for details see Supplementary Fig. ).
Comparison of PGx levels with those of other regulatory authorities
We conducted a comparative analysis of the annotated PGx levels available on PharmGKB of selected DLs with those of the US Food and Drug Administration (FDA), the European Medicines Agency (EMA), Health Canada/Santé Canada (HCSC), and the Pharmaceuticals and Medical Devices Agency (PMDA), Japan.
For the quantitative analysis, the PGx level was coded with points, with an increasing number of points for the severity of the PGx level, resulting in 1 point for “informative PGx” and 4 points for “testing required.”
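A minimal sketch of this point coding and of the per-agency summary used later in the comparison is shown below; the mapping of the two intermediate levels to 2 and 3 points follows the stated ordering by severity, and the input format is an assumption.

import statistics

# 1 = least actionable, 4 = most actionable, following the coding stated above
PGX_POINTS = {
    "informative PGx": 1,
    "actionable PGx": 2,
    "testing recommended": 3,
    "testing required": 4,
}

def summarize_agency(levels):
    """Mean and standard deviation of the point-coded PGx levels of one agency."""
    points = [PGX_POINTS[level] for level in levels]
    return statistics.mean(points), statistics.stdev(points)

# Example with a few hypothetical annotated labels of one agency
mean_points, sd_points = summarize_agency(
    ["actionable PGx", "testing required", "informative PGx"]
)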
PGx-relevant information in Swiss Drug Labels searched by NLP
Of the 5979 hit sentences identified by the NLP search, 2564 sentences were classified as PGx-relevant. In total, 3415 sentences were excluded due to a lack of PGx-relevance. Most of the PGx-relevant sentences were part of the section on pharmacokinetics ( n = 1110), followed by the precautionary measures section ( n = 839). The other PGx-relevant sentences were distributed homogeneously across the other sections (dosage/application, contraindications, interactions, adverse effects, or properties/effects). A small number of PGx-relevant sentences appeared in the sections on indication ( n = 3), pregnancy ( n = 7), or overdose ( n = 10) (Fig. ). No PGx-relevant information appeared in any of the ten remaining sections, such as ability to drive or operate machines, preclinical data, or other hints. Most of the PGx-relevant sentences were found in the ATC group “N: Nervous system” ( n = 793), followed by “C: Cardiovascular system” ( n = 408), and “B: Blood and blood building systems” ( n = 352). The lowest number of PGx-relevant sentences appeared in the ATC group “V: Various” ( n = 3). No PGx-relevant sentences were discovered in the ATC group “H: Systemic hormonal preparations” (Fig. ). The PGx biomarker most frequently mentioned was the drug-metabolizing enzyme CYP2D6 ( n = 679), followed by CYP2C19 ( n = 499). The drug transporter SLCO1B1 (OATP1B1), the enzyme glucose-6-phosphate dehydrogenase (G6PD), and the HLA allele HLA-A*3101 were named in n = 254, n = 277, and n = 284 sentences, respectively. Overall, 76 PGx-relevant sentences referred to other biomarkers, e.g., IL28B. However, in 39 cases, PGx information was provided without mentioning any specific biomarker (Fig. ).
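A minimal sketch of how the counts reported above (PGx-relevant sentences per label section, per ATC level-1 group, and per biomarker) can be tallied once each hit sentence has been annotated is shown below; the column names are illustrative.

import pandas as pd

def count_pgx_sentences(annotated):
    """annotated: one row per PGx-relevant sentence, with illustrative columns
    'dl_section', 'atc_level1', and 'biomarker'."""
    return (annotated["dl_section"].value_counts(),
            annotated["atc_level1"].value_counts(),
            annotated["biomarker"].value_counts())

# Example with two annotated sentences
example = pd.DataFrame({
    "dl_section": ["pharmacokinetics", "precautionary measures"],
    "atc_level1": ["N", "C"],
    "biomarker": ["CYP2D6", "CYP2C19"],
})
by_section, by_atc, by_biomarker = count_pgx_sentences(example)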
Analysis of the reference drug labels (refDLs)
Based on the PGx-relevant sentences, we defined 167 refDLs. In almost all cases (166 of 167), the DL of the generics contained the same text as the original product. For the ATC code L01BC02 (fluorouracil), we defined two refDLs, because the texts of the DLs of Efudix ® and Fluorouracil Labatec ® differed in information. Of the defined refDLs, there were 17 combination products where the PGx information was the same as for the mono products of each component. Therefore, these refDLs were not annotated separately. Moreover, there were four products (carbamazepine, escitalopram, fluorouracil, and codeine/acetaminophen) addressing more than one biomarker in the PGx-relevant sentences with different PGx levels. The PGx information of the refDLs was identified in 10 out of 20 different sections in the DL. One example, where PGx-relevant information is given in multiple sections, namely “indication,” “precautionary measures,” “contraindications,” and “properties/effects,” is abacavir (Ziagen ® ); for further examples see Fig. (for details on all substances see Supplementary Table ). Most of the refDLs ( n = 92 substances, 55%) were assigned to PGx level 3 “actionable PGx”, followed by PGx level 4 “informative PGx” ( n = 26, 16%). Only 9 (5%) or 4 (2%) DLs were assigned to PGx level 1 “testing required” or PGx level 2 “testing recommended,” respectively. In total, 19 DLs (11%) could not be classified using the original definition of the PGx levels, as the information given did not meet the criteria proposed by PharmGKB, and 17 (10%) refDLs on combination products reported the same information as the mono product (Fig. ). Summarizing the PGx-level-annotated refDLs by anatomic group (ATC code level 1) revealed that PGx level 3 appeared most frequently (Fig. ). The anatomic group “J: general anti-infectives for systemic use” represented an exception, as it contained abacavir (including three combination products), all labeled with PGx level 1. The same PGx level was attributed to carbamazepine, oxcarbazepine, codeine, and tetrabenazine as well as fluorouracil in the anatomic groups “N: Nervous system” and “L: antineoplastic and immunomodulating agents”, respectively (Fig. ). Looking at the specific biomarkers in the refDLs (Fig. ), CYP2D6 was most frequently mentioned ( n = 52), followed by G6PD ( n = 35). In total, 12 refDLs informed on PGx without mentioning a specific biomarker. Many refDLs stated two biomarkers, e.g., G6PD and CYP2D6 in Co-Dafalgan ® , one accounting for acetaminophen and the other for codeine. Overall, biomarkers in drug-metabolizing enzymes predominated (Fig. ).
Annotations entered into the PharmGKB knowledgebase
The extracts of the Swiss DLs were translated and entered into the PharmGKB knowledgebase on 22.10.2019 ( https://www.pharmgkb.org/labelAnnotations ), resulting in 131 annotations (Fig. ). In addition, the collaboration with PharmGKB led to a new definition for PGx level 4 “informative PGx.” The original definition of this category was “label mentioning a gene or protein involved in the metabolism or pharmacodynamics of the drug, with no information to suggest that variation in these genes/proteins leads to changes in drug response.” Due to difficulties in our primary analysis, we started a discourse with PharmGKB, which finally resulted in an adaptation of the definition of PGx level 4 (published on 08/07/2019).
Comparison of PGx levels with those of other regulatory authorities
We compared the assigned PGx levels of the 126 uploaded DLs of Swissmedic with those authorized by other regulatory authorities and observed that the majority was rated as “actionable PGx”. This is also indicated when determining a mean after translating the different categories into points. Here, the mean ± SD was 1.984 ± 0.693 ( n = 126), 2.053 ± 0.831 ( n = 76), 2.100 ± 0.8847 ( n = 30), 2.178 ± 0.777 ( n = 45), and 2.077 ± 0.688 ( n = 26) points for Swissmedic, FDA, EMA, HCSC, and PMDA, respectively. However, the comparison also revealed that the PGx levels assigned (Fig. ) and the number of DLs reporting PGx-relevant information (Fig. ) differed. According to PharmGKB, “test required” was assigned to eight Swissmedic DLs, one EMA DL, and three FDA DLs. No PGx levels for these compounds were assigned to the DLs of HCSC or PMDA. From the FDA, eight DLs are rated as “test required”, and four of these DLs (gefitinib, rasburicase, tamoxifen, and ibrutinib) were rated differently in the Swissmedic DLs. Looking at the 126 DLs under consideration, all five regulatory authorities had a majority of DLs rated as “actionable PGx.” However, only the FDA has about the same number of DLs with “actionable PGx” as Swissmedic.
To our knowledge, this is the first NLP-based extraction of information related to PGx from the Swiss DLs. We focused on genes involved in drug metabolism and transport (pharmacokinetics) and on information on HLA risk alleles. We extracted 2564 PGx-relevant sentences, which corresponded to 167 chemical substances. Our analysis showed that 9.47% of all Swiss DLs (167 out of 1763 different ATC codes by 31st January 2019 ) mentioned PGx-relevant information. Most of this PGx information (55%) was classified as “actionable PGx”. We identified the pharmacokinetics section as the prevailing section reporting on PGx.
However, this particular section is one of the last sections in a DL, not only in the Swiss DLs but also in those approved by other agencies . Therefore, it may be speculated that there is a risk that PGx information could be overlooked by HCPs. For some drugs coded with PGx level 1, the PGx-relevant information was located within the section on precautionary measures, which reports on genetic polymorphisms known to be associated with ADRs (especially in the case of HLA-associated ADRs, e.g., carbamazepine). Our findings are in line with those by Ehmann et al. , reporting that the pharmacokinetics and the precautionary measures sections are the most likely to state PGx information in DLs approved by the EMA. Other sections such as indications, dosage/use and contraindications rarely provide PGx information. In contrast to the study of Shimazawa et al. , we did not prioritize one section per DL, but analyzed all sections mentioning PGx information. In accordance with findings from other countries , CYP2D6 was the most frequently mentioned biomarker in the Swiss DLs. This cytochrome P450 enzyme is known for its genetic variability, with about 100 different alleles resulting in the phenotypes of poor, intermediate, normal, and ultra-rapid metabolizer (UM) with a prevalence of 0.4–5.4%, 0.4–11%, 67–90%, and 1–21%, respectively . Moreover, CYP2D6 is known to be involved in the metabolism of a wide range of commonly used drugs including SSRIs , opioids , and tamoxifen . The second most cited biomarker in the Swiss DLs was CYP2C19. This enzyme also affects a large number of drugs including SSRIs , opioids , and in particular the bioactivation of clopidogrel . However, none of the Swiss DLs contained the biomarker ABCB1, although the Swiss guideline on the treatment of unipolar depressive episodes recommends testing for selected genetic variants of ABCB1 (P-glycoprotein) in patients taking antidepressants . Although ABCB1 was not mentioned in the Swiss DLs, the anatomic group N (nervous system) dominated when analyzing PGx levels per anatomic group. This group contains antiepileptics (carbamazepine , oxcarbazepine , phenytoin ), antidepressants such as SSRIs , and analgesics such as opioids . The anatomic group N relates to various drugs for which treatment is associated with more difficulties (e.g., therapy failure) compared to therapies of other anatomic groups. Indeed, Bschor et al. and Muller et al. assume that psychiatric patients would likely benefit from a PGx test prior to therapy in order to avoid ADRs or therapy failure. The PGx-relevant information in anatomic group C mostly referred to statins (e.g., fluvastatin ) and beta-blockers (e.g., metoprolol ). Almost all hits in the anatomic group B were related to clopidogrel, which is well studied for the influence of genetic variability . We identified nine (5%) refDLs with statements categorized as PGx level 1 and four (2%) refDLs as PGx level 2. For these drugs, there is convincing evidence for the clinical benefit of PGx testing prior to treatment initiation. This may be explained by the severity of the potential ADRs . For HCPs, the instruction in these DLs is clear. Accordingly, DLs containing information with PGx level 1 or 2 are the most straightforward to handle, as clear recommendations on therapeutic consequences are given. The majority (55%) of the refDLs were classified as PGx level 3. They mention the influence of a genetic variant on drug efficacy or safety without recommending genetic testing.
Here, the question is how HCPs are supposed to handle this information. Should HCPs inform the patient, or simply take note of the information in case of ADRs or nonresponse? The predominance of PGx level 3 illustrates the uncertainty that still dominates the field of PGx. PGx level 4 was the second most applied PGx level (16%) for the Swiss DLs. The original definition of this category was adapted in a discourse with PharmGKB. Overall, the presentation of PGx information is very heterogeneous, not only in terms of its localization in the DL but also in the assigned PGx levels and the various associated recommendations. The information on PGx is often not precise and the presentation lacks a predefined structure. Similar findings have also been reported by Ehmann et al. (EMA) and Shimazawa & Ikeda (US and Japan).

By entering the extracts of 126 Swiss DLs on the PharmGKB website, we were able not only to make this information publicly accessible but also to make it comparable to the information approved by regulatory agencies of four other countries. The individual comparisons of the Swiss DLs with the DLs of the four different regulatory authorities listed in PharmGKB revealed a large heterogeneity, not only in the number of compounds with PGx information but also in terms of the PGx levels assigned to the available PGx information. Accordingly, there is a clear need for a standardized presentation with a well-defined structure. Based on our analysis, there is a tendency toward more PGx testing (PGx levels 1 and 2) in the Swiss DLs compared to the FDA or EMA. However, it has to be taken into account that the DLs of the EMA represent rather general guidance, still enabling differences in the recommendations in national DLs.

During the revision process of this manuscript, it was recommended that the Swiss DLs be compared to the DLs published by the regulatory agencies of selected European countries. One country that would be suitable for such a comparison is the Netherlands, where guidelines on PGx are available and which appears to have an initiative for PGx implementation with the Dutch pharmacogenetic working group. However, their DLs are only available in Dutch. For Germany, we found a list of drugs published by the VFA (Verein der Pharma-Forschenden) with all substances for which PGx testing is required or recommended (analogous to PGx levels 1 and 2). In the context of pharmacogenotyping of genes relevant for pharmacokinetics, we are able to compare seven substances (see Supplementary Table ). Shekhani et al. analyzed the concordance of the DLs of regulatory agencies with the guidelines provided by CPIC/DPWG and revealed that, out of 54 drugs with an actionable gene–drug interaction in the CPIC and DPWG guidelines, only 50% of the agencies described actionable PGx information in the DLs, and they were in agreement in only 18% of the cases. We agree with Tan-Koi et al., who suggested after a cross-sectional study of PGx associations in six different countries that there should be an international consensus for the presentation of PGx in DLs. Also, Ehmann et al. stated that the number of DLs mentioning PGx is steadily increasing and that new legislation is necessary to support HCPs in the application of PGx information. In contrast to the FDA, which uses subheadings on PGx, the current structure of the Swiss DLs does not support the incorporation of standardized PGx information.
Limitations We have to mention that we searched for PGx relevance with word stems concerning pharmacokinetics, thereby excluding information on pharmacodynamics. Our major concern was the inter-individual variability in drug metabolism, which is known to affect a large number of patients in daily care. However, by focusing on pharmacokinetics, we missed information on most oncological drugs, where genotyping is part of compound selection. In contrast to most previous studies analyzing the DLs for PGx information by reviewers reading the DLs, we applied an automated search by NLP. Of the total hit sentences identified by NLP, 43% contained PGx-relevant information. We consider NLP a strength, even though we are aware of the effort that was necessary for the semantic standardization. As no predefined standardization for the presentation of PGx information in the Swiss DLs exists, the definition of word stems was challenging (a minimal sketch of this screening step is given below). In order to facilitate the accessibility of DLs for NLP, standardization would be necessary. During the attribution of PGx levels, we found DLs reporting on the same chemical substance but stating different information. One reason for these discrepancies might be the different dates of market admission. Moreover, a few DLs involved two or more biomarkers, resulting in two PGx levels. Finally, some reference DLs inform on the influence of PGx on the drug's efficacy or safety without mentioning a specific biomarker. These particular DLs were excluded from publishing on the website by PharmGKB, as they do not provide usable information for the HCP.
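As a minimal sketch of the word-stem screening step described in the limitations above (the stem list and example text are illustrative assumptions, not the study's actual search terms), sentences of a label can be flagged as PGx candidates and then passed on for manual review:

```python
import re

# Illustrative word stems for pharmacokinetics-related PGx screening
# (hypothetical examples; the study's actual stem list is not reproduced here).
STEMS = ["cyp", "ugt", "metaboli", "poor metabolizer", "g6pd", "hla-"]

def pgx_candidate_sentences(label_text: str) -> list[str]:
    """Return the sentences of a drug label that contain at least one stem."""
    sentences = re.split(r"(?<=[.!?])\s+", label_text)
    pattern = re.compile("|".join(map(re.escape, STEMS)), re.IGNORECASE)
    return [s for s in sentences if pattern.search(s)]

example = ("Patients who are CYP2D6 poor metabolizers may require dose adjustment. "
           "Store below 25 degrees Celsius.")
print(pgx_candidate_sentences(example))  # only the first sentence is flagged
```

As in the study, such stem matching only yields candidate sentences; manual review is still needed to confirm PGx relevance, and in the analysis above only 43% of the NLP hits turned out to contain PGx-relevant information.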
The analysis of PGx information provided in Swiss DLs revealed large heterogeneity. PGx information varies not only in the wording used to describe the information but also in the section where the information appears. In addition, the instructions for clinical practice are rather vague. In summary, this makes the identification and interpretation of PGx information difficult for HCPs. However, the predominance of PGx level 3 “actionable PGx” demonstrates that numerous actionable drug–gene interactions (DGIs) exist, which could be considered in an optimized drug therapy. For their decision-making and patient counseling, HCPs depend on a supportive DL. Therefore, a specific section dedicated to PGx, allowing efficient identification of PGx information, is favorable. Here, standardized language and a well-structured, consistent presentation of PGx information within the DL would be required to facilitate accessibility (e.g., for NLP and, in a further step, for clinical decision support systems). Finally, instructions on PGx testing should become more explicit, to support HCPs in personalizing drug therapies and tailoring pharmacotherapy.
Medical and legal aspects of telemedicine in ophthalmology
1f260ca9-5542-4b05-830c-79f2b614ae69
6820487
Ophthalmology[mh]
Telemedicine provides welfare services by using information and communication technology (ICT) when the health professional and the patient are distant. Telemedicine allows the reliable transmission of medical information (texts, sounds, images, etc.) for the prevention, diagnosis, treatment and monitoring of patients. Telemedicine is useful for doctors and patients, particularly in rural areas or in developing countries without good health infrastructures. It can play an important role in the development of health care all over the world. Telemedicine is helpful in remote group collaboration between health professionals from different countries because it consists of real-time communication between the participants. In this way, professionals in distant locations can communicate with experts on a particular issue. This behavior is part of good medical practice. Moreover, patients seeking a second opinion from distant doctors may use telemedicine. Telemedicine may also be part of preventive medicine by giving patients information on their health conditions and by monitoring chronic diseases such as diabetic retinopathy, glaucoma, age-related macular degeneration, etc. By means of telemedicine, patients can be telemonitored, thus avoiding long waiting lists and reducing hospitalizations. Electronic prescription in telemedicine avoids errors, such as mistakes in the patient's tax code, age, or dosage, resulting from the doctor's illegible handwriting. The improvement of healthcare is the real goal of telemedicine. Notwithstanding all these advantages, telemedicine is not yet widely established in Europe. The difficulties of application are the costs of telemedicine services, the difficult technical interoperability across countries, and the lack of defined ethical standards and regulations.

Telemedicine has multiple applications in ophthalmology because this specialty relies on visual images for the diagnosis, therapy, and follow-up of diseases. Ophthalmology has quickly adapted to the cybernetic world and uses telemedicine technology to provide access to advanced healthcare. Its effective use has shown economic and health benefits. For example, telemedicine may improve the quality of health care in two important ophthalmological diseases, diabetic retinopathy (DR) and retinopathy of prematurity (ROP). DR is a cause of blindness among working-age adults. In particular, the telemonitoring of DR is helpful for clinicians. Telemedicine programs provide screening for DR patients in primary care. Consequently, the spread of such ophthalmological programs will promote access to retinal screening or eye tests at the appropriate frequency. Image-based diagnosis and telemedicine are reliable, accurate, and cost-effective for ROP in premature babies. The imaging technology used in telemedicine is well suited to the documentation of the retinal fundus, thus also improving ophthalmologic care. Telemedicine in ophthalmology is also useful for intraocular pressure monitoring and macular disease monitoring. Finally, remote consultations and examinations, instead of face-to-face visits, are gaining ground thanks to two-way communication technologies in emerging clinical institutions.

According to Articles 56 and 57 of the Treaty on the Functioning of the European Union (TFEU), telemedicine falls under the freedom to provide services. According to European law, telemedicine is both a health service and an information society service (an electronic service provided for a fee at a distance).
Therefore, both the regulations relating to health care and the regulations relating to information society services are applicable. As far as information and telecommunication are concerned, the following documents need to be considered: Directive 95/46/EC, the General Data Protection Regulation (GDPR), Directive 98/34/EC (Information Society Services Directive), Directive 2000/31/EC (Directive on electronic commerce), and Directive 2002/58/EC (Directive on privacy and electronic communications). Directive 2011/24/EU, the so-called cross-border directive, regulates health services. The EU has increased the presence of telemedicine in Europe as a standard medical service for European patients protected by the social security system. The EU's efforts are also economically significant (over €500 million in research funding on eHealth instruments). However, the value of telemedicine does not conceal its complex difficulties. For example, when health workers from different countries collaborate, there is no consensus on the language to use for data recording in the patient's medical record or on who is responsible for the technical training (so-called e-literacy) of health professionals. All electronic health records (EHRs) need to integrate into a global eHealth infrastructure. Despite several working documents, we lack a set of rules on telemedicine in Europe. Most regulatory positions still depend on the Member States, which hold the main competences in health care. The approach to telemedicine varies among European countries. Even at the national level, legal voids threaten the practice of telemedicine, leaving patients and health workers unprotected.

Regulation 2016/679 (the GDPR) represents the European legislation on data protection. Published in the Official Journal of the European Union on May 4 2016, it entered into force on May 24 2016, but its application began two years later, from May 25 2018. Its purpose is the definitive harmonization of the regulation on the protection of personal data. With the Lisbon Treaty, the protection of personal data became a fundamental right in the EU. By increasing citizens' trust in the digital society thanks to more stringent protection, the regulation supports the digital development of the EU and safeguards the free movement of personal data. With the European regulation, there is a shift from a proprietary view of data (which cannot be processed without consent) to a view based on control of the data that enhances their free circulation. Patients have a fundamental right to the confidentiality of their medical records. Patient privacy cannot be set aside unless it is waived by informed, non-coercive consent or overridden by the public interest. The information disclosed should be limited to the relevant information or part of the medical record. Telemedicine creates problems arising from the involvement of non-clinical staff in teleconsultations and from the vulnerability of transmission lines to security breaches. Healthcare professionals and public health organizations carry out the processing of health data according to a specific discipline. Confidentiality must be guaranteed for citizens' health data in health facilities. Health data about intimate details of patients are subject to a general prohibition of dissemination, as well as to enhanced protection (Article 4 of the GDPR). Health data comprise all personal information that can reveal a person's state of health and physical or mental condition.
Genetic data and photographs taken for surgical interventions or check-ups are also health data. Furthermore, as established by the Court of Cassation (Civil Cassation, United Sections, judgment 12.27.2017, No. 30981), sensitive data that can reveal the state of health must be processed with organizational safeguards, such as encryption, that make the data subject unidentifiable. According to the European regulation, health data can be used only for purposes related to health (treatment purposes), for the supervision of the National Health System and for research in the public interest. For these purposes, it is possible to introduce special conditions or additional limits on processing. In this sense, the Italian legislator has provided, with the new Privacy Code, further measures to protect health data, set by the national supervisory authority and reviewed every two years. Article 9 (letter h) of the GDPR permits data processing for purposes of preventive or occupational medicine, such as the assessment of the employee's ability to work and diagnosis, in accordance with Union or Member State law or contracts with health professionals. However, the rule allows the Member States to introduce limitations with regard to the processing of genetic data, biometric data, or health data. The Italian legislator, with the decree updating the Privacy Code, has introduced the possibility for the Guarantor (the Italian Data Protection Authority) to impose specific safeguard measures (in addition to the normal security measures) for the processing of health data.

Directive (EU) 2016/680 is similar to the GDPR. Many of the rules it contains are equivalent to those of the new European regulation on privacy and personal data. They have a specific scope of application and concern the processing carried out by the competent authorities for prevention, investigation, public security, etc. This directive requires that data be kept only for the time necessary to achieve the stated purposes and that the need for continued storage, or deletion, be verified after that deadline. The appointment of a data protection officer is also obligatory for the judicial authority, because of the assistance that this figure can provide in the complex processing of sensitive data. The text provides administrative sanctions for violations concerning the methods of processing and introduces criminal sanctions for processing carried out for illegitimate purposes.

Before proceeding with data collection, it is necessary to inform the patient. The informed consent document indicates the subject collecting the data, the purposes of the processing, the methods of processing, the subjects to whom the data can be communicated, the identification details of the data controller, and the methods available to protect one's data. The Italian Ophthalmological Society has developed the informed consent forms. Informed consent is the tool through which information about the intervention is made available to the patient in order to obtain his or her consent to proceed. Therefore, it presupposes two distinct moments: the information and the actual consent. Informed consent consists of the intervention-specific information sheet describing the execution procedure, the post-treatment course, the advantages and the complications. The act of consent is the same for all the information sheets.
Article 1 of Law 219/2017, in accordance with the principles set out in Articles 2, 13 and 32 of the Constitution, states that no medical treatment may be started or continued without the informed consent of the person concerned, except in cases expressly provided by law, such as treatment in the public interest. The Italian legal system establishes the freedom to choose the place of care (Article 32 of the Constitution and Legislative Decree 502/92) and the freedom to undergo treatment or not (Article 1 of Law 219/2017). According to the GDPR, once citizens have decided to undergo a treatment, they cannot refuse consent to the processing of data for treatment and diagnosis purposes. The digital preservation process aims to make a document usable over time, in its "digital essence" and in accordance with the new privacy regulation (European Regulation 679 on the protection of personal data). The Digital Administration Code regulates the Information Technology (IT) document and its processing and recognizes the equivalence between a computerized document and a paper document. The Decree of the President of the Council of Ministers of the Italian Republic (DPCM) of 13 November 2014 sets the technical rules for the IT document. It was preceded by the DPCM of 3 December 2013, published in Official Gazette No. 59 of 12 March 2014, concerning the management of IT documents and their preservation. The technical rules on preservation follow international and national standards.

The electronic health file (ESF), according to Article 12 of Legislative Decree 179/2012, is an IT tool that combines health and social-health data and documents (digital or digitized) related to the patient. Its function is to share such data and the patient's medical history among various doctors or health organizations. Authorized healthcare personnel and the patient (through secure means, e.g. a smart card) can access the ESF. Since Article 12 of Decree Law 179/2012 came into force, the inclusion of data within the ESF has depended on the consent of the patient (Article 3 bis). In this regard, patients know who has access to their data and how these data are used. The patient has the right to revoke consent and to obscure specific data in the ESF. Although mobile electronic medical records are beneficial, several factors prevent their use to full potential. Training in their functionality and reliable infrastructure might foster tablet implementation.

The European Union of Medical Specialists (EUMS) has published a document that contains the "European Definition of Medical Act". This definition, adopted for the first time in 2005, was later amended in 2006 and finally at the Brussels meeting of 25 April 2009. This is the approved version: "The medical act covers all professional activities, for example of scientific, teaching, training, educational, organizational, clinical and medical technology, carried out in order to promote health, prevent diseases, make diagnoses and prescribe therapeutic treatments or rehabilitative treatments for patients, individuals, groups or communities, in the framework of ethical and deontological rules. The medical act is a responsibility of the qualified physician; it must be performed by the doctor or under his direct supervision and/ or prescription". Patients who disclose personal information to their physicians need that information to be protected against incidents arising in the treatment process. In this sense, there is no distinction between traditional medicine and telemedicine.
The duty of confidentiality applies to all information provided to a tele-consulting doctor by a third party such as a healthcare professional. Most patients understand that the primary physician responsible for their case will need to ensure that all team members have the information required to fulfill their professional roles in the care process. It is necessary to inform patients that information about them will be disclosed to others involved in their health care. If the doctor decides to divulge confidential information, he must be ready to explain and justify his decision. Because of the need to inform members of the support team, the law assumes the patient's implicit consent to disclose information to them. A final point concerns the circumstances in which the patient's refusal to disclose can be overridden by the doctor or other authorities. These circumstances arise when the medical conditions of a patient pose a serious threat to the community in general. Health status data may be made visible to third parties, such as relatives, family members and voluntary personnel, only after the patient, if conscious, has been informed and has given permission. In any case, it is necessary to respect the request of the hospitalized person to keep his or her presence in the health facility, or information on his or her health conditions, confidential even from legitimate third parties.

The issue of the right of access to the patient's medical record is obviously linked to the more general right of access to administrative documents (governed by Article 22 et seq. of Law 241/1990) and to the processing of personal health data according to the personal data protection code set out in Legislative Decree No. 196/2003. The public medical record is capable of producing effects on subjective legal situations of public relevance (see Criminal Court, Section V, 21 November 2011, No. 42917). The prevailing doctrine attributes juridical relevance to the medical record, as it is the only instrument capable of recording patient care information so as to allow communication between the various health care workers. The doctor in charge of its compilation is a public official pursuant to Article 357 of the penal code, while the person responsible for its storage and safekeeping is always the head of the department where the patient is undergoing treatment. The doctor is required to exercise particular accuracy in recording all elements in the medical record (diagnosis, therapy, outcomes, etc.), according to Article 24 of the Decree of the Minister of Health of 5 August 1977 and following the procedures set forth in Article 26 of the Code of Medical Deontology. The medical record, together with the related reports, requires indefinite preservation, as it is an official act essential to ensure the certainty of clinical data, as well as a valuable source of documentation for research on historical health data. In this context, the question of the limits on third-party access to the information contained in another person's medical record has legal importance.

Several devices are useful for remote communication, such as videoconferencing units, e-mail, webcams, and smartphones. The communication channel is variable (broadband, network, wireless). The legal perspective of confidentiality focuses on the relationship between the subjects involved rather than on the systems by which they communicate, but attention must also be paid to this last aspect. The NIS directive can be considered the first step in the European cybersecurity strategy.
Approved by the EU Parliament on 6 July 2016, the directive aims to reinforce IT security and resilience within Europe. The need for the NIS directive stems from one consideration: networks, systems, and information services play a vital role in today's society. Without them the internal market could not work, so their reliability and security are essential for economic and social activities. The NIS directive applies to two categories: operators of essential services that are necessary for the maintenance of basic social and economic activities (such as health, transport and energy companies), and digital service operators such as search engines, e-commerce platforms, etc. Both need to adopt appropriate technical and organizational measures to manage risks and prevent IT incidents. In the event of non-compliance, severe penalties apply, ranging from a minimum of 12 thousand up to 150 thousand euros. In this regard, it is necessary that, at the national level, a cyber-security incident response team and a national authority responsible for the security of networks and information systems be designated to act in the event of an incident. Furthermore, the NIS directive establishes a cooperation group, composed of representatives of the Member States, the Commission and the European Network and Information Security Agency (ENISA), to promote collaboration between the countries of the Union on the security of networks and information systems and to facilitate the exchange of information. The two pieces of legislation, i.e. the GDPR and the NIS directive, overlap when an IT security incident involves a breach of personal data.

eIDAS is the European regulation governing electronic signatures, money transfers and other types of electronic transactions in the European single market. It has allowed the creation of standards for electronic signatures, digital certificates and other forms of electronic authentication, thus allowing the replacement of paper documents with digital equivalents that have the same legal value and official recognition in all EU countries. The member countries of the EU are required to recognize electronic signatures that meet the standards set by eIDAS. In particular, it distinguishes three types of electronic signatures (simple, advanced, and qualified) relevant to the handling of sensitive data.

The program director must adhere to local, state, federal, and international guidelines for the acquisition, transmission, and storage of health information, interfacing with electronic medical records and widely available archiving and communication systems, preferably using standards-based interoperability protocols. Imaging and diagnostic acquisition devices in ophthalmology should undergo periodic maintenance following the manufacturer's recommendations. Diagnostic displays need periodic checking and recalibration to ensure normal function. The security, integrity, and availability of data (including backup and archiving) are the responsibility of the IT staff. The setting, i.e. a medical office, urgent care, emergency department, or community health care center, should be able to monitor the patient's vital parameters during the test, review the medical history in detail, conduct and transmit the telemedicine ophthalmological examination, and organize appropriate follow-up and care. Telemedicine exams have different space requirements depending on whether they use synchronous or asynchronous technology.
Synchronous visits require space for the patient, the local provider, and the remote provider to conduct the exam in private and discuss the results. Synchronous visits typically require audio and video equipment, a computer to transmit the necessary information about the exam, and the devices and technologies needed to conduct a remote eye examination. Asynchronous visits require space for the necessary imaging equipment and other devices used, as well as space for the preparation and presentation of images and data. Generally, a small space within an existing clinical area is adequate for telemedicine examinations and consultations. There are few normative indications concerning the personnel who acquire, transmit, and interpret telemedicine data. Each member of the team must have the qualifications established by the program. An activity-based (or "function-based") assessment of staff requirements is necessary. The reading method for image analysis should be transparent to the applicant. The reading center is responsible for reading errors. The basic requirements for staff involved in ophthalmology telemedicine at a remote imaging site are the diagnostic equipment, awareness of the risks and clinical assessment of ocular complications due to pupillary dilatation (if imaging is performed with mydriatic eye drops), universal precautions, antiseptic technique and informed consent. Training for safe and correct contact with patients in a clinical setting, as indicated by applicable hospital and facility standards, is necessary. Training on specific devices and equipment to obtain the necessary certifications, and adherence to quality control for the instruments, are important. The ophthalmologists who work as remote doctors should receive initial training and periodic re-evaluation to meet quality standards for the performance of these activities. The doctor who reads the test must confirm with the insurer that liability coverage for medical negligence extends to this activity. Current ophthalmology telemedicine is based on a store-and-forward model, in which images are acquired and subsequently evaluated by an expert reader under the supervision of the physician. The most recent applications may involve home-based tests with transmission and web-based interactions by the patient.

Telemedicine relies heavily on the video and audio transmission of data through telecommunication networks. Secure access to the network and secure data transmission are essential for the confidentiality of personal and medical data. The promise of such a network is that patient information will be electronically available to authorized personnel wherever the patient is or wherever the information is needed. Access requires at least one authentication check (a password). The checks ensure that access is available only to authorized users. An appointed individual is responsible for the security of a connected system. All network workers must be aware of their responsibilities. All incidents threatening security must be kept under control. The most obvious way to reduce the risk of unauthorized access to computer data on the Internet is to control traffic through the interface between the local network and the external Internet. This is the function of a firewall. It is important to know that firewalls cannot protect against traffic that does not pass through them. There are different types of firewalls: the network-level firewall uses a router to decide
what to pass or block based on network protocols, usually Internet Protocol (IP) addresses, while the application-layer firewall is a system, including a personal computer (PC), with two ports (one for incoming and one for outgoing traffic). A firewall is a means of ensuring that only the right traffic passes through. Telemedicine includes four main areas:
- live or synchronous audio-video telemedicine, i.e. bi-directional real-time communication between a patient and a healthcare provider using audiovisual telecommunication technologies and data collection;
- store-and-forward or asynchronous telemedicine ("store and return"), i.e. electronic transmission of health data (images, text or other digital data) to a healthcare provider for the evaluation and provision of the service using methods other than real-time interaction with the patient; it is a technique in which information, subdivided into packets along its path between the individual stations (or nodes) of the network, must be completely received before it can be retransmitted;
- remote patient monitoring (RPM), i.e. collection of health data directly from the patient, usually during the normal activities of daily life, transmitted to a healthcare provider for analysis and possible action;
- mobile health: health care, patient communication and training based on mobile communication platforms, e.g. mobile phones, tablets, PCs, etc.
Encryption is therefore a powerful aid in protecting telemedicine transmissions. There are two types of encryption algorithms. With secret-key encryption, the sender and recipient both use the same key to lock and unlock the message. With public-key cryptography, on the contrary, each user has two unique keys, i.e. a public key and a private key. The private key is used to encrypt (sign) any message sent, producing a digital signature. The recipient can decrypt the signature with the public key to verify the identity of the sender and the authenticity of the message. The power of digital signatures is that they also detect very slight changes in a message; a minimal code sketch of this tamper-detection property follows below. Informed users naturally expect the clinical information on the Internet to be of high quality, accurate, timely, and evidence-based. Digital-native users may have less critical faculties and tend to take at face value the validity of what they read on the internet. Several organizations have tried to establish these principles in guidelines or codes of ethical practice for the construction of Internet sites. The Hi-Ethics consortium is a voluntary group that aims to bring together the websites and the most used health information providers, with the goal of gaining consumer confidence in internet health services. The goals of Hi-Ethics are to offer internet services that reflect ethical and high-quality standards, to provide reliable and up-to-date health information, to keep personal information private and secure, and to take special precautions for personal health information. Hi-Ethics allows consumers to distinguish online health services that follow these principles from those that do not. There are two broad categories of equipment used in telemedicine for the ophthalmology program: information acquisition devices (camera, optical coherence tomography, tonometer, autorefractometer, campimeter, etc.) and image communication devices (computers, servers, network devices, etc.) to send data.
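The following Python sketch illustrates the tamper-detection idea mentioned above using a shared secret key (HMAC from the standard library). The key and the example report are hypothetical placeholders, not part of any real system described in this article.

```python
import hmac
import hashlib

# Hypothetical shared secret between sender and recipient (secret-key scheme).
SECRET_KEY = b"example-shared-secret"

def sign(message: bytes) -> str:
    """Compute an HMAC-SHA256 authentication tag for the message."""
    return hmac.new(SECRET_KEY, message, hashlib.sha256).hexdigest()

def verify(message: bytes, tag: str) -> bool:
    """Return True only if the message matches the tag, i.e. was not altered in transit."""
    return hmac.compare_digest(sign(message), tag)

report = b"IOP right eye: 16 mmHg"
tag = sign(report)

print(verify(report, tag))                      # True: message intact
print(verify(b"IOP right eye: 26 mmHg", tag))   # False: a one-character change is detected
```

Public-key signatures provide the same tamper-detection property without requiring the sender and recipient to share a secret beforehand, which is why they are generally preferred when many remote sites exchange data.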
The remarkable technological development of biomedical equipment for therapeutic, rehabilitative, and diagnostic use has certainly brought great benefits, but it has also created particular problems for the protection of personnel, patients, and health workers in health structures, mostly from electronic risks. Before any biomedical equipment is used, it must be verified that it has undergone regular acceptance testing and that the operators have received adequate, specific training in its correct use. For a long time, the CEI EN 60601-1 standard has been the reference for manufacturers and users of electro-medical devices for the diagnosis and treatment of patients. International work to adapt the standard in recent years, after amendments published in 1991 and 1995, has led to the publication of the third edition in 2006, in which the concept of safety has been expanded to include the essential performance of electro-medical equipment. This experience also extends to the legal aspects of telemedicine equipment. The basis of this legislative framework is the Consumer Protection Act of 1987, which deals with general product liability and applies to teleconsultation equipment. The regulations apply to Computed Tomography (CT) scanners, X-rays, ultrasounds, etc. All these devices (including new videoconferencing systems and related equipment) must show the European conformity (CE) mark indicating that they comply with the appropriate safety, quality, and performance standards. The Medical Devices Agency (MDA) is also responsible for registering the manufacturer and reporting incidents, as well as for the general implementation and promotion of European directives. The MDA has identified recurrent causes of adverse incidents with medical devices, such as poor quality, outdated or worn-out devices, incompatibility with auxiliary equipment, poor documentation, inappropriate use, inadequate training, maintenance errors, or lack of assistance. These comments mainly refer to malfunctions of the operational equipment.

Several devices are useful to communicate remotely, such as a videoconferencing unit, e-mail, webcam, or smartphone. The channel allowing communication can be broadband, network, or wireless. The operational risks are those identified by the MDA as the main causes of adverse incidents. The analysis of these causes shows that they fall into two categories: inadequacies due to the technology and those due to the staff. We can distinguish four main technological risks involving:
- image quality: a patient has the right to expect that a consultant can draw the same correct conclusions from an image on a telemedicine screen as from a conventional face-to-face visit, which is particularly important for ophthalmology examinations;
- the lack of equipment suitable for a health service;
- malfunctioning equipment: the breakdown of the computer or video equipment is unfortunately one of the most common features of telemedicine;
- inadequate guidelines: guidelines act as a bridge between the technology and the participants in teleconsulting.
The guidelines determine the teleconsultation process, and the documentation provides an archive of therapy, prescriptions, drug dosages, plans, etc. This combination of protocol and recording of the action provides a powerful audit trail that can have considerable value in any legal dispute.
In Italy, the Directorate General for Medical Devices and the Pharmaceutical Service is responsible for the completion and implementation of the regulation of medical devices, including tasks related to market surveillance, incident monitoring, clinical investigations, the evaluation of technologies, and the direction of Health Technology Assessment (HTA) activities. As regards the operational risks due to the staff, the Commission refers to the definition of "healthcare professionals" given in Article 3/f of Directive 2011/24/EU. According to Directive 2005/36/EC, healthcare professionals are the doctor, the nurse responsible for general care, the dentist, the midwife, and the pharmacist. Principles are needed to ensure that each team member is aware of his or her responsibilities and of those of the other members. The patient should also know who is responsible for his or her care. These are simple precautions to avoid complaints of negligence. Telemedicine has raised awareness of issues that highlight legislative discrepancies concerning the accreditation and qualification of health staff intended to protect the patient from incompetent professionals.

The benefits of telemedicine are promising, particularly in ophthalmology, a specialty based on diagnostic imaging. The application of telemedicine is useful for patients, for ophthalmologists and for health institutions, as it allows access to medical care, avoids unnecessary long journeys, and increases profits by reaching more patients, expanding the spectrum of available services, and reducing health care costs and waiting lists. Telemedicine in ophthalmology allows collaboration between professionals from different locations, sometimes even from different countries. It simplifies, through different telecommunications channels and mobile technologies, direct patient access to ophthalmologists who practice in another city or country, without requiring any of the participants to travel. However, some characteristics of telemedicine can become problematic, such as the violation of privacy, the physical distance, the introduction of new technological methods, the purchase of expensive equipment, the weakening of the doctor/patient relationship, the involuntary increase in opportunities for incorrect behavior, the delegation of functions due to the remote nature of the treatment process, etc. Health professionals should consider that telemedicine could lead them to face further potential medical errors within a substantially more challenging standard of care.

The EU has not yet issued specific rules on medical liability, despite its specific features, and this gap can jeopardize the development of telemedicine in Europe, thus denying all its benefits to European patients. However, the EU will not be able to create a uniform regulation covering all aspects of telemedicine, but it has already taken a step forward in terms of technology and privacy, which are in fact already well-defined topics under European law. Regarding the rights of patients in the field of telemedicine, the EU will be able to create some basic guidelines in the context of cross-border healthcare patients' rights. The rights of European patients still lack a set of uniform rules, and standardization in such a complex area as telemedicine would perhaps not be the best choice. We cannot expect harmonized rules for the medical liability deriving from telemedicine.
In fact, Member States have very different national laws addressing these issues, because the EU brings together both continental-law and common-law models. Because of this intrinsic difference, any attempt to harmonize civil liability, and especially criminal liability, would be doomed to fail. It would be up to each Member State to provide a legal framework for telemedicine, while the role of the EU would be limited to requiring Member States to regulate it. When telemedicine takes place across national borders, or across Europe, there is likely to be even more opportunity for legal discord over which laws apply, i.e. the laws of the country where the telepatient is present or those of the country where the teleconsultation took place. What happens if telemedicine is legal to practice in one country but not in another? We need to form a telemedicine task force in ophthalmology involving the academic and research world, drawing on excellent resources and on collaboration between the Italian Ophthalmology Society, the Italian Association of Doctors Ophthalmologists and the Society of Legal Medicine in Ophthalmology, to establish legislation and guidelines for telemedicine. A health manager or health director of telemedicine is necessary to avoid the delegation of care to less qualified subordinates and to establish their competences and responsibilities. At present, telemedicine is not a defined discipline that would justify the professional accreditation of an "Ophthalmologist Specialist in Telemedicine". Tele-education or eLearning should become a standardized process with certification requirements, and continuing courses are necessary to disseminate the current European legislation on telemedicine at both the Ministry of Health and Medical Order levels in all Italian regions. In conclusion, this is a good investment of effort and resources, because the worldwide future of telemedicine in ophthalmology undoubtedly promises well.
Improved chickpea growth, physiology, nutrient assimilation and rhizoremediation of hydrocarbons by bacterial consortia
e9eca537-7475-4ab6-b2a7-45bc7a26ac4a
11490054
Microbiology[mh]
Sustainably supplying food for the world's current and future population increases the demand for food production on existing or degraded lands. Soil contamination with persistent organic pollutants, especially hydrocarbons (PHCs) and their derivatives, is a major problem faced by both the developing and the developed world. PHCs enter the soil through accidental spills, oil refineries, waste production, oil exploration and automobile workshops. When these PHCs enter soil systems, they change the physicochemical properties of the soil (such as EC, pH, C:N ratio, porosity and water retention capacity). Plants grown in PHCs-contaminated sites show stunted growth due to the production of hydrogen peroxide (H2O2), ethylene and reactive oxygen species (ROS). These metabolites reduce the growth, yield and biomass production of plants by altering nutrient intake and damaging proteins, cell membranes and nucleic acid structures. The introduction of PHCs into soil not only reduces plant growth but also decreases the native micro- and macrobiota. There are several physicochemical methods to remediate PHCs from the soil, but these methods involve high costs for technical labor and, in most cases, produce harmful byproducts that are even more noxious than the parent compounds. However, plant–microbe coupled bioremediation of PHCs is a suitable technique due to its lower cost, greater aesthetic value, and maximal chances of complete degradation of PHCs into CO2 and H2O. Various microbes such as fungi, algae and bacteria are widely used in the bioremediation of PHCs. Among them, bacterial communities act as the principal degrading agents, as they use PHCs as a carbon (C) source for biomass production and reproduction. Individual bacterial isolates can break down organic pollutants, but complete mineralization is very rare due to the unavailability of specific enzymes. However, mixed microbial cultures containing different bacterial strains can completely mineralize PHCs, as they increase the bioavailability and mobility of PHCs through exopolysaccharide secretions. This in turn increases PHC degradation in soil and eventually overcomes the phytotoxic impacts of PHCs on plant growth. PHC removal in the rhizosphere can be enhanced by proper bioaugmentation strategies. Additionally, the supportive role of microbes through N fixation, siderophore production, phosphate solubilization, and catalase, oxidase and ACC-deaminase activities further strengthens the plant–microbe interaction even under stressful conditions. Plant roots attract and support nearby microbial communities through the secretion of root exudates, which normally contain sugars, vitamins, organic acids, and amino acids. Most root exudates have structures similar to low-molecular-weight PHC fractions, which is why microbial communities utilize them as a C substrate to improve their own growth and degrade the majority of the hydrocarbons in soil. Moreover, the interaction of microbes and plants can be beneficial for better physiology and improved plant growth and yield (i.e., plant height, biomass, number of leaves and grain yield) in PHCs-contaminated soil. Chickpea ( Cicer arietinum L.) has been widely tested by scientists as a suitable candidate for the reclamation of contaminated sites. The plant has a large and extensive root system that improves soil aeration and the stabilization of soil structure and supports the biodegradation of pollutants.
The deeper root system enhances the bioavailability of hydrocarbons to several PHC-degrading microbes. Most importantly, the plants can develop symbiotic relationships with N-fixing microbes and in turn improve soil fertility. The exudation of various enzymes and organic acids attracts nearby microbiota, changes the rhizosphere pH and favorably improves soil health by maintaining the nutrient balance for plant growth even under contamination. In leguminous crops, the degradation and accumulation of PHCs mainly occur in roots rather than in the upper plant parts. However, PHC accumulation in leaves and grains mostly depends on aerial deposition rather than uptake through the root zone. It has been well documented that leguminous crops favor the microbial communities that help enhance PHC breakdown in the rhizosphere. Additionally, most PHCs have a high octanol–water partition coefficient (log Kow > 3) and tend to bind to the root surface; as a result, contaminant uptake into the plant is reduced. However, studies on indigenous hydrocarbon-degrading bacterial consortia that combat hydrocarbon pollution and promote chickpea growth under pollutant stress are still limited. In general, it is equally crucial to produce high-quality food and to eliminate or reduce the pollutant levels in the same soil. One of the most effective and environmentally appropriate methods to remove pollutants is to use microbes (especially mixed bacterial consortia) and plants together. The primary goal of this study was to evaluate the phytotoxicity of PHCs on chickpea plants and to determine how newly constructed bacterial consortia benefit the degradation of PHCs along with the development and growth of chickpea plants in soil contaminated with PHCs. Collection of soil samples Initially, several PHCs-contaminated sites were visited, and soil samples were taken from hotspots of the selected regions/sites across Pakistan, including Rawalpindi (33°58’15.4” N, 73°15’12.9” E), Abbottabad (34°11’59.7” N, 73°14’29.9” E), Faisalabad (31°25’41.5” N 73°05’23.4” E), Bahawalpur (29°41’72.8"N, 71°66’86.4” E), and Muzaffargarh (30°10’03.7” N 70°56’44.1” E). Preferably, soil samples were collected from moist sites to ensure the existence of active microbial communities. Collected soil samples were then immediately packed in sterilized, clean plastic bags and transferred to the research laboratory, Institute of Soil and Environmental Sciences (ISES), University of Agriculture Faisalabad (UAF), Pakistan, and were kept at 4 °C before further use. Thereafter, the selected PHCs-contaminated soil samples were used to set up the microcosm experiments to enrich PHCs-degrading microbial cultures.
The selected PHCs-contaminated soil samples were used to set up the first enrichment microcosm experiment. Aerobic incubation was used to enrich microbial consortia in 200-mL Erlenmeyer flasks with 90 mL of C-free mineral salt medium (MSM) prepared with the chemical composition described by Suja et al. . After thorough mixing, the final pH of the MSM was determined (i.e., 7.0 ± 0.2). Finally, the MSM, along with the other materials, was autoclaved for the setup of the microcosm experiments. In the first enrichment experiment, diesel (1% v/v) was added as a source of PHCs, and acetone was used as a carrier solvent. Subsequently, each 200-mL Erlenmeyer flask, containing 90 mL of autoclaved MSM, was supplied with 10.0 ± 0.5 g of soil (as inoculum) that had been collected from the contaminated spots. Autoclaved controls were also prepared to evaluate any abiotic degradation of PHCs. Lastly, the flasks were kept at room temperature (25.0 ± 0.5 °C) in the dark on a rotary shaker at 150 rpm. For the first enrichment experiments, at least triplicate microcosms were prepared for the soil of each contaminated site. After a successful first enrichment experiment (verified by checking the remaining PHC contents and the subsequent microbial growth), sequential subculturing was done up to four times to enrich PHCs-degrading microbial cultures. For each subculturing, 10 mL of culture (from the previous enrichment) was added to 90 mL of freshly prepared (autoclaved) MSM, and diesel (1% v/v) was also added to the medium. All other incubation conditions were kept the same as described above. Microbial consortia were enriched after sequential transfers and/or subcultures. A spectrophotometer (Thermo Electron Corporation, Evolution-300LC, England) was used to estimate the growth of microbes by determining the optical density (OD) at 600 nm following previously described methods. For the first enrichment, microbial growth was not measured due to the presence of soil particles (which could significantly affect OD600 values). Similarly, a portable hydrocarbons analyzer (PHA-100 plus, PETROSENSE, CA, USA) was used to determine the remaining contents of PHCs in all microcosm experiments, following the detailed methodology described in our previous studies . For the culture optimization experiment, the two best-enriched cultures (from site 1 and site 5) were further selected on the basis of higher microbial growth (in terms of OD600 values) and greater PHC degradation (in percentage) compared with sites 2, 3 and 4 (an illustrative calculation of this ranking is sketched below). To optimize the performance of the PHCs-degrading indigenous microbial consortia (developed through sequential transfers and sub-culturing), a separate set of experiments was conducted. For this purpose, PHC degradation and microbial growth of the developed consortia were assessed at different pH levels and temperatures. Briefly, 10 mL of enriched culture (at OD600 of 0.5) was added to the same growth medium (C-free MSM) as described above. For the pH experiment, different pH levels (i.e., 3.5, 5.5, 7.0, 8.5, and 9.5) were obtained by adding an appropriate amount of an acid (HCl, 0.01 N) or an alkali (NaOH, 0.01 N). In the case of the temperature experiment, microcosm bottles were kept at different temperatures (i.e., 4, 15, 25, and 45 °C).
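As a minimal illustration of how the best-performing cultures could be ranked from these two measurements, the short Python sketch below computes the percentage of PHC degraded and sorts the cultures; the readings and units are hypothetical placeholders, not the study's data.

```python
# Hypothetical OD600 and PHC readings (arbitrary concentration units) for three enrichment cultures.
cultures = {
    "site 1": {"od600": 0.92, "phc_initial": 10_000.0, "phc_residual": 2_100.0},
    "site 2": {"od600": 0.55, "phc_initial": 10_000.0, "phc_residual": 6_400.0},
    "site 5": {"od600": 0.88, "phc_initial": 10_000.0, "phc_residual": 2_500.0},
}

def degradation_percent(initial: float, residual: float) -> float:
    """Degradation (%) = (initial - residual) / initial * 100."""
    return (initial - residual) / initial * 100.0

# Rank first by degradation percentage, then by growth (OD600).
ranked = sorted(
    cultures.items(),
    key=lambda kv: (degradation_percent(kv[1]["phc_initial"], kv[1]["phc_residual"]), kv[1]["od600"]),
    reverse=True,
)
for site, data in ranked:
    pct = degradation_percent(data["phc_initial"], data["phc_residual"])
    print(f"{site}: {pct:.1f}% PHC degraded, OD600 = {data['od600']:.2f}")
```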
All microcosm flasks (except those in the temperature optimization experiment) were kept on a rotary shaker with continuous shaking (150 rpm) in the dark at 25.0 ± 0.5 °C. After successfully completing the initial trials, the best two sites (i.e., site 1 and site 5) were further used for strain isolation via the agar plate method. Briefly, C-free MSM was prepared once again, and agar was added to it. This medium was poured into Petri plates after being autoclaved, and then 100 µL of filter-sterilized diesel was added to all plates and spread evenly with an autoclaved spreader. Thereafter, 1 µL of microbial culture was spread over the plates to ensure homogeneity. These plates were then incubated at room temperature (25 ± 3 °C) for 72 h, after which they were carefully examined for colony formation. The bacterial strains were isolated from the plates by observing their colony size and growth patterns. The same procedure was repeated to cultivate the pure bacterial strains on separately prepared MSM-agar plates. Characterization and identification of selected PHCs-degrading bacterial isolates A total of 25 bacterial isolates (9 from site 1 and 16 from site 5) were selected initially based on their morphological characteristics and colony appearance. These bacterial isolates were then subjected to various biochemical assays to check their plant growth-promoting potential and stress tolerance. Thereafter, a total of 9 bacterial isolates (four from site 1 and five from site 5) were further selected based on these biochemical assays, including catalase, oxidase, siderophore, ACC-deaminase, phosphatase, and exopolysaccharide assays. Details of the mentioned attributes were already presented in our previous work . After characterization of the bacterial isolates based on plant growth promotion, a sufficient amount of each bacterial isolate was collected in microtubes (LB medium with 30% glycerol solution) and stored at -80 °C for preservation. For identification of the isolated strains, the nine best isolates were then sent to Macrogen, Inc., South Korea, for Sanger sequencing targeting the 16S rRNA gene (Fig. ). The four bacterial strains, including Alcaligenes faecalis strain MH-2 (ON714529), Alcaligenes sp.
strain MH-3 (ON714530), Achromobacter denitrificans strain MH-6 (ON714531), and Sphingobacterium spiritivorum strain MH-9 (ON714532) were mixed together (by adding 4.0 mL of each strain from its enriched culture) in freshly autoclaved MSM (980 mL) to form bacterial consortium 1 (BC1). Similarly, five bacterial strains, Sphingobacterium spiritivorum strain MH-10 (ON714533), Achromobacter xylosoxidans strain MH-13 (ON714534), Stenotrophomonas sp. strain MH-18 (ON714535), Alcaligenes faecalis strain MH-22 (ON714536) and Stenotrophomonas rhizophila strain MH-24 (ON714537), were mixed (by adding 5.0 mL of each strain from its enriched culture) in 980 mL of newly prepared, autoclaved MSM to construct bacterial consortium 2 (BC2). These two bacterial consortia (BC1 and BC2) were tested for PHCs degradation in a rhizoremediation experiment with chickpea in PHCs-contaminated soil. For the pot experiment, soil was collected from the agronomic research area of ISES, UAF, Faisalabad. Initial physicochemical attributes of the soil were examined following the methodologies of Estefan et al. . Seeds of the chickpea variety NIAB-CH-2016 were obtained from the Nuclear Institute for Agriculture and Biology (NIAB), Faisalabad, Pakistan. A pot trial was then carried out in the wirehouse of ISES, UAF, Faisalabad, Pakistan, to evaluate the effect of the bacterial consortia (BC1 and BC2) on the rhizoremediation of PHCs and on the growth of chickpea (variety NIAB-CH-2016) in soil artificially spiked with 1.5% and 3.0% diesel. Six surface-sterilized chickpea seeds (uninoculated or inoculated with culture) were sown in each pot containing 10 kg of unsterilized contaminated or uncontaminated soil. The recommended doses of N (20 kg ha −1 as urea), P (60 kg ha −1 as single superphosphate), and K (60 kg ha −1 as sulfate of potash) for chickpea were added to all pots . The experiment comprised fifteen treatments with three replications, combining three levels of PHCs (0.0%, 1.5% and 3.0%) with the two bacterial consortia BC1 and BC2 (50 mL each). A total of forty-five pots were arranged in the wirehouse in a completely randomized design; treatment details are presented in Table .
Determination of plant growth attributes
Seedling emergence was observed from the 2nd day after sowing, but emergence was first counted on the 3rd day because sprouting was not clearly distinguishable until then; counting continued daily until a constant count was reached. At harvest, root and shoot lengths were measured with a measuring stick. For plant dry weight, samples were oven-dried at 65 °C until a constant weight was obtained on a portable balance. Other growth attributes, including the numbers of leaves, branches, pods and nodules, were counted manually at harvest.
Determination of plant physiological attributes
Mature fresh leaves were collected from each plant to determine chlorophyll a (Chl a), chlorophyll b (Chl b), total chlorophyll (T Chl), and carotenoid contents following the standard procedures of Arnon . In brief, 0.5 g of fresh leaves was ground in 5 mL of 80% methanol and the homogenate was centrifuged at 5000 rpm and 20 °C for 10 min. The supernatant was then collected in vials and the optical density was measured with a UV-visible spectrophotometer at 480, 645, and 663 nm.
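The paper cites Arnon's procedure but does not reproduce the equations. The sketch below shows how the recorded absorbances could be converted to chlorophyll contents using the commonly cited Arnon (1949) coefficients, with the 0.5 g of leaf tissue and 5 mL extract volume assumed from the protocol above; the coefficients and example absorbances are assumptions for illustration, not values from this study.

```python
# Sketch only: converting leaf-extract absorbances (A663, A645) into
# chlorophyll contents with the commonly cited Arnon (1949) equations.
# Coefficients, extract volume (5 mL) and leaf mass (0.5 g) are assumptions
# based on the protocol above, not values reported by this study.

def arnon_chlorophyll(a663: float, a645: float,
                      extract_ml: float = 5.0, fresh_wt_g: float = 0.5):
    """Return (Chl a, Chl b, total Chl) in mg per g fresh weight."""
    chl_a = (12.7 * a663 - 2.69 * a645) * extract_ml / (1000 * fresh_wt_g)
    chl_b = (22.9 * a645 - 4.68 * a663) * extract_ml / (1000 * fresh_wt_g)
    total = (20.2 * a645 + 8.02 * a663) * extract_ml / (1000 * fresh_wt_g)
    return chl_a, chl_b, total

# Hypothetical absorbances for a leaf extract
print([round(v, 3) for v in arnon_chlorophyll(0.85, 0.48)])
```

Carotenoid estimation from the 480 nm reading is omitted here because the study does not state which carotenoid equation was used.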
The SPAD value of the plants was also recorded with a SPAD meter (SPAD-502, Konica Minolta Optics, Inc., Japan) 45 d after sowing . Similarly, the photochemical quantum yield (YII), fluorescence yield (Ft), photosynthetically active radiation (PAR) and electron transport rate (ETR) of chickpea were measured with a photosynthetic yield analyzer on a bright sunny day between about 12:00 and 01:00 p.m. . To calculate the membrane stability index (MSI), the procedure of Sairam et al. was adopted, in which the leachate (ions) from plant leaves is collected in deionized water. In brief, two sets of test tubes were filled with 0.2 g of fresh leaf sample and 10 mL of deionized water. The first set was kept in a water bath at 40 °C for 30 min, after which the electrical conductivity (EC1) was recorded with an EC meter; the second set was kept in a water bath at 100 °C for 15 min to obtain EC2. MSI was calculated with Eq. (1): $$\mathrm{MSI}\;(\%) = \left[1 - \frac{EC1}{EC2}\right] \times 100$$ Similarly, the relative water content (RWC) was calculated following the method outlined by Sairam et al. . Briefly, fresh leaves (0.5 g; fresh weight, FW) were immediately immersed in distilled water for approximately 4 h to reach maximum turgidity, removed, blotted dry with tissue paper, and weighed to obtain the fully turgid weight (TW). The samples were then oven-dried at 65 °C for 2 d to obtain the dry weight (DW). RWC was calculated with Eq. (2): $$\mathrm{RWC}\;(\%) = \frac{FW - DW}{TW - DW} \times 100$$
Determination of antioxidative attributes of chickpea plants
To measure the total proline content of plant leaves, the standard protocol of Bates et al. was followed; the brief methodology has been provided in our previous study . For catalase (CAT) activity, the methodology described by Ahmad et al. was adopted: a 10.4 mL reaction mixture was prepared containing enzyme extract (200 µL), 0.3 M H 2 O 2 (200 µL) and 0.002 M EDTA in 10 mL of 0.05 M phosphate buffer (pH 7.0). The decrease in absorbance at 240 nm, caused by the disappearance of H 2 O 2 , was measured with a spectrophotometer. For superoxide dismutase (SOD) activity, the reduction in absorbance of nitroblue tetrazolium chloride was recorded at 560 nm . Peroxidase (POD) activity of leaves was measured following the established protocol of Angelini et al. , monitoring the absorbance at 436 nm to follow the conversion of guaiacol to tetra-guaiacol.
Determination of nutrient attributes of chickpea plants and residual PHCs in soil
Plant nutrient analyses followed the methodologies of Estefan et al. , in which the total K, P, and N contents of digested plant samples were determined with a flame photometer, a spectrophotometer and a Kjeldahl apparatus, respectively. Likewise, the residual PHCs content in soil was estimated with a portable hydrocarbon analyzer (PHA-100 plus, PETROSENSE, CA, USA) following the detailed protocols and procedures described in our previous works .
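As a quick illustration of Eqs. (1) and (2) reconstructed above, the following sketch computes MSI from the two conductivity readings and RWC from the fresh, turgid and dry leaf weights; the numerical inputs are hypothetical.

```python
# Minimal sketch implementing Eqs. (1) and (2) as reconstructed above;
# the conductivity readings and leaf weights below are hypothetical.

def membrane_stability_index(ec1: float, ec2: float) -> float:
    """MSI (%) = [1 - (EC1 / EC2)] * 100."""
    return (1.0 - ec1 / ec2) * 100.0

def relative_water_content(fw: float, tw: float, dw: float) -> float:
    """RWC (%) = (FW - DW) / (TW - DW) * 100."""
    return (fw - dw) / (tw - dw) * 100.0

print(round(membrane_stability_index(0.18, 0.62), 1))      # -> 71.0
print(round(relative_water_content(0.50, 0.58, 0.11), 1))   # -> 83.0
```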
Statistical analysis
Analysis of variance (ANOVA) was applied to the data, and the F-test was used to check the level of significance among treatment means; the honestly significant difference (HSD) test was then applied to compare treatment means using Statistix 8.1 software. Variation in mean values ( n = 3) is presented as the standard error. Correlation matrix and principal component analyses (PCA) were also performed to examine the relationships among treatments using Origin 2022b software .
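The analyses were run in Statistix 8.1 and Origin 2022b. As a hedged illustration only, an equivalent open-source workflow (one-way ANOVA, Tukey-type HSD comparison of treatment means, and PCA) could look like the sketch below; all data are synthetic and the treatment labels are placeholders.

```python
# Hedged sketch of an equivalent open-source analysis pipeline (one-way
# ANOVA, Tukey-type HSD comparison, PCA). All data are synthetic and the
# treatment labels are placeholders; the study itself used Statistix 8.1
# and Origin 2022b.

import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
shoot_dw = {                                   # hypothetical shoot dry weights (g), n = 3
    "control": rng.normal(5.0, 0.3, 3),
    "1.5% PHC": rng.normal(3.2, 0.3, 3),
    "1.5% PHC + BC2": rng.normal(4.1, 0.3, 3),
}

# One-way ANOVA (F-test) across treatment means
f_stat, p_val = f_oneway(*shoot_dw.values())
print(f"ANOVA: F = {f_stat:.2f}, p = {p_val:.4f}")

# HSD-style pairwise comparison of treatment means
values = np.concatenate(list(shoot_dw.values()))
groups = np.repeat(list(shoot_dw.keys()), 3)
print(pairwise_tukeyhsd(values, groups))

# PCA on a (treatments x traits) matrix of mean values
trait_matrix = rng.normal(size=(15, 6))        # 15 treatments, 6 measured traits
pca = PCA(n_components=2).fit(trait_matrix)
print("Explained variance ratios:", pca.explained_variance_ratio_)
```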
Physicochemical attributes of soil
The soil texture was sandy loam, with 53% sand, 25% silt and 22% clay. The pH, electrical conductivity (EC), exchangeable cations, and organic matter of the soil were 7.95 ± 0.17, 1.07 ± 0.28 dS m −1 , 16.3 ± 1.43 mmol c L −1 , and 0.73 ± 0.05%, respectively. Total N, available P, and K were 180 ± 3.20 mg kg −1 , 86.7 ± 1.09 mg kg −1 , and 58.4 ± 2.35 mg kg −1 , respectively. PHCs were not detectable in this soil.
Microcosm enrichment and culture optimization experiments
In the first enrichment experiment, after two weeks of incubation, PHCs removal was significantly greater in all microcosms (except site 3) than in the autoclaved control (Fig. S2 A). With subsequent subculturing, PHCs remediation and microbial growth on PHCs in MSM improved further [Fig. S2 (B and C)]. Moreover, after three sequential transfers, the soil-free enriched cultures were able to remediate 92.0% (site 1), 66.3% (site 2), 43.4% (site 3), 61.9% (site 4), and 93.5% (site 5) of the initially added PHCs concentration after 24 d of incubation (Fig. A), with the cultures from site 1 and site 5 showing the highest growth in terms of OD 600 (Fig. B and Fig. ). The culture optimization experiments showed that the enriched cultures could remediate PHCs over a wide range of pH and temperature (Figs. and ); however, removal was highest at neutral pH and room temperature (25 °C) (Figs. and ). In general, the enriched cultures from site 1 and site 5 performed better in terms of both growth (OD 600 ) and PHCs removal at all tested pH levels and temperatures (Figs. and ) and were therefore selected for the screening, plant growth promotion and rhizoremediation experiments.
Plant growth promotion and rhizoremediation experiments
After the culture optimization experiment, four bacterial strains were screened and selected from site 1 and five from site 5 and mixed to construct BC1 (from site 1) and BC2 (from site 5). The prepared bacterial consortia (BC1 and BC2) were then used with chickpea plants in soil artificially spiked with diesel; the results of the plant growth promotion and rhizoremediation experiment are detailed below.
Seedling emergence and plant growth attributes
PHCs contamination substantially reduced seedling emergence (Table ). Regardless of treatment, most seedlings emerged within 2 days of sowing. The addition of BC significantly improved seedling emergence compared with the control (without PHCs and BC). In uncontaminated soil, the addition of BC1 and BC2 increased emergence by 5.89% and 11.1%, respectively, relative to the uninoculated control (Table ). Under 1.5% PHCs contamination, the addition of BC1 and BC2 increased seedling emergence by 10.6% and 16.7%, respectively, indicating the important role of the microbes in germination. However, at the higher level of contamination (3.0% PHCs), neither bacterial consortium produced a significant improvement. PHCs showed significant phytotoxicity ( P ≤ 0.05) to chickpea plants and reduced the growth attributes (Fig. and Table ). Under PHCs contamination (1.5% and 3.0%), significant decreases were observed in root length (38.8% and 58.9%), shoot length (24.8% and 53.3%), root dry weight (34.7% and 53.4%), shoot dry weight (37.5% and 65.5%), number of nodules (50.0% and 82.1%), number of pods (41.2% and 88.1%), number of stems (50.0% and 60.0%), and number of compound leaves per plant (44.3% and 73.7%), compared with the uncontaminated and uninoculated control. Treatments containing the microbial cultures in the absence of PHCs stress improved all of these growth attributes compared with the uncontaminated and uninoculated control.
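The percentage differences reported in this and the following paragraphs are, as far as can be inferred, simple relative changes of a treatment mean versus its respective control; a short sketch of that arithmetic (with hypothetical numbers) is given below.

```python
# Sketch of the arithmetic assumed behind the reported percentages:
# a simple relative change of a treatment mean versus its control.

def pct_change(treatment_mean: float, control_mean: float) -> float:
    """Relative difference (%) of a treatment mean versus its control."""
    return (treatment_mean - control_mean) / control_mean * 100.0

# Hypothetical example: 35 cm shoot length with BC2 vs 30 cm in the
# uninoculated control corresponds to a 16.7 % increase.
print(round(pct_change(35.0, 30.0), 1))
```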
In the absence of PHCs contamination, the inclusion of the bacterial consortia (BC1 and BC2) increased root length (by 15.2% and 27.9%), shoot length (15.3% and 21.6%), root dry weight (17.5% and 24.0%), shoot dry weight (30.9% and 41.9%), number of nodules (34.9% and 44.0%), number of pods (15.0% and 26.1%), number of stems (23.1% and 28.6%) and number of compound leaves per plant (15.9% and 28.8%) compared with the uncontaminated and uninoculated control (Fig. and Table ). Similarly, at 1.5% PHCs contamination, the inclusion of both cultures (BC1 and BC2) reduced the phytotoxicity of PHCs on plant growth attributes and improved root length (by 14.5% and 26.7%), shoot length (14.1% and 16.8%), root dry weight (19.2% and 20.3%), shoot dry weight (11.2% and 25.3%), number of nodules (30.0% and 36.4%), number of pods (9.09% and 18.4%), number of stems (28.6% and 37.5%) and number of compound leaves per plant (23.1% and 32.1%), respectively, in comparison with the respective uninoculated control. Likewise, at 3.0% PHCs contamination, the addition of both cultures (BC1 and BC2) increased root length (by 24.6% and 37.5%), shoot length (20.9% and 24.6%), root dry weight (19.6% and 23.7%), shoot dry weight (17.8% and 30.1%), number of nodules (37.5% and 54.5%), number of pods (27.3% and 27.3%), number of stems (20.0% and 20.0%), and number of compound leaves per plant (25.0% and 38.8%), respectively, over their respective uninoculated control (Fig. and Table ).
Physiological attributes of chickpea plants
Petroleum hydrocarbon phytotoxicity significantly ( P ≤ 0.05) reduced the RWC and MSI of chickpea plants. Under 1.5% PHCs contamination, RWC and MSI were reduced by 19.8% and 23.7%, respectively, compared with the uninoculated and uncontaminated control. Likewise, at 3.0% PHCs contamination, decreases of 40.9% in MSI and 48.9% in RWC were observed compared with the unamended control (without PHCs and BC). In contrast, the inclusion of the bacterial consortia (BC1 and BC2) produced 7.34% and 9.92% higher RWC and 9.97% and 11.3% higher MSI, respectively, than the control treatment. Similarly, increases of 12.1% and 16.1% in MSI and of 10.8% and 13.3% in RWC were observed in the BC1- and BC2-applied treatments, respectively, under 1.5% PHCs contamination, over their respective uninoculated control (with 1.5% PHCs). Additionally, under 3.0% PHCs contamination, increases of about 25.6% and 17.1% (in MSI and RWC, respectively) were noticed with BC1, and of 28.3% and 19.4% with BC2, relative to their uninoculated control (with 3.0% PHCs) (Fig. ). The results further revealed that PHCs significantly ( P ≤ 0.05) reduced the Chl a, Chl b, T Chl, and carotenoid contents of chickpea plants (Fig. ). The Chl a, Chl b, T Chl and carotenoid contents were reduced by 21.9% and 65.7%, 31.1% and 68.4%, 26.1% and 66.9%, and 40.5% and 84.8% under 1.5% and 3.0% PHCs contamination, respectively, in comparison with the uncontaminated control (without PHCs and BC). The treatments containing the bacterial consortia (BC1 and BC2) produced higher Chl a (6.20% and 10.3%), Chl b (4.43% and 10.9%), T Chl (5.41% and 10.6%) and carotenoid contents (15.3% and 27.6%) than the uninoculated and uncontaminated control.
However, under 1.5% PHCs contamination, the application of the bacterial consortia BC1 and BC2 increased the Chl a, Chl b, T Chl and carotenoid contents by 9.13% and 15.6%, 7.91% and 6.46%, 8.62% and 11.9%, and 3.01% and 11.2%, respectively, over their uninoculated control (with 1.5% PHCs but without BC) (Fig. ). Additionally, under 3.0% PHCs contamination, the application of BC1 and BC2 increased Chl a by 13.5% and 22.6%, Chl b by 12.5% and 27.6%, T Chl by 13.1% and 24.9%, and carotenoid contents by 29.6% and 22.1%, respectively, compared with their uninoculated control (with 3.0% PHCs but without BC). The toxic effects of PHCs also significantly ( P ≤ 0.05) reduced the SPAD value, Ft, PAR, YII, and ETR of chickpea plants (Table ). Under PHCs contamination (1.5% and 3.0%), significant decreases were observed in SPAD value (29.7% and 44.4%), Ft (29.7% and 55.5%), PAR (8.19% and 17.9%), YII (13.1% and 37.3%) and ETR (17.2% and 29.4%) relative to the unamended control (in the absence of PHCs and BC). Treatments containing either BC produced higher physiological attributes than the uninoculated and uncontaminated control (without BC and PHCs): the SPAD value (5.29% and 10.5%), Ft (21.1% and 35.5%), PAR (4.92% and 6.83%), YII (4.59% and 17.2%) and ETR (23.6% and 39.2%) were improved in the presence of BC1 and BC2, respectively (Table ). Similarly, the inclusion of BC1 and BC2 under 1.5% PHCs contamination resulted in increases of 22.1% and 25.9% in SPAD value, 26.1% and 29.3% in Ft, 2.76% and 4.13% in PAR, 2.86% and 12.9% in YII and 13.5% and 24.4% in ETR over their uninoculated control (with 1.5% PHCs but without BC). Likewise, under 3.0% PHCs contamination, the BC1- and BC2-applied treatments showed improved SPAD value (by 15.1% and 20.5%), Ft (by 26.6% and 37.4%), PAR (by 7.74% and 10.2%), YII (by 13.8% and 18.6%) and ETR (by 12.6% and 26.5%) compared with their respective control (with 3.0% PHCs but without BC).
Antioxidative enzymes of chickpea plants
The phytotoxicity of PHCs to chickpea plants significantly ( P ≤ 0.05) increased the proline, CAT, SOD, and POD contents (Table ). The proline, CAT, SOD, and POD contents increased by 46.4% and 62.2%, 61.6% and 74.3%, 32.5% and 45.6%, and 41.4% and 51.9% under 1.5% and 3.0% PHCs contamination, respectively, over the uncontaminated control (without PHCs and BC). No significant difference in proline content was observed in the uncontaminated, inoculated treatments (without PHCs but with BC). However, under 1.5% PHCs contamination, the presence of BC1 and BC2 produced substantial decreases of 16.9% and 23.6% in proline, 21.9% and 24.0% in CAT, 18.9% and 23.7% in SOD and 17.8% and 22.7% in POD contents, respectively, in comparison with the uninoculated control (with 1.5% PHCs but without BC) (Table ). Similarly, the application of BC1 and BC2 under 3.0% PHCs contamination produced reductions of 13.5% and 22.6% in proline, 12.5% and 27.6% in CAT, 13.1% and 24.9% in SOD, and 29.6% and 22.1% in POD contents, respectively, compared with the respective control (with 3.0% PHCs but without BC).
Nutrient acquisition in plants and removal of hydrocarbons from soil
The toxic impact of PHCs on chickpea plants significantly ( P ≤ 0.05) decreased the N, P, and K contents of the plants (Table ).
The N, P, and K contents decreased by 36.7% and 56.0%, 31.3% and 49.6%, and 26.3% and 42.6% under 1.5% and 3.0% PHCs contamination, respectively, compared with the uncontaminated control (without PHCs and BC). In the absence of PHCs, the treatments with the bacterial consortia (BC1 and BC2) produced higher N (12.6% and 18.3%), P (15.1% and 17.8%) and K contents (18.3% and 20.5%) than the uninoculated and uncontaminated control (without BC and PHCs). Under 1.5% PHCs contamination with the addition of BC1 and BC2, decreases of 20.1% and 28.8% in N, 14.5% and 25.9% in P, and 22.8% and 24.4% in K contents, respectively, were noticed relative to the respective control (with 1.5% PHCs but without BC) (Table ). Additionally, the application of BC1 and BC2 under 3.0% PHCs contamination produced 21.0% and 25.4% higher N, 12.3% and 16.9% higher P, and 18.2% and 27.7% higher K contents, respectively, than their respective control (with 3.0% PHCs but without BC). Overall, with regard to hydrocarbon removal, both BC caused a substantial reduction in PHCs contents in planted as well as non-planted treatments (Fig. ). Losses of approximately 32.0% and 22.0% of the initially applied PHCs concentrations (1.5% and 3.0%, respectively) were observed after 140 d in the uninoculated, unplanted treatments. The chickpea plant alone phytoremediated approximately 52.0% and 35.0% of the initial PHCs concentrations when grown in 1.5% and 3.0% PHCs, respectively (Fig. ), while the treatments containing the BC removed more PHCs than their uninoculated controls (chickpea-planted or non-planted treatments without BC). Approximately 32.0% and 59.0% removal of the initial PHCs concentration was observed in treatments containing BC1 and BC2, respectively, in the absence of chickpea plants (at 1.5% PHCs), whereas approximately 72.0% and 76.0% removal was observed with BC1 and BC2, respectively, in the planted treatments (at 1.5% PHCs), which is significantly higher than in the unplanted controls (with BC). Likewise, about 45.0% and 52.0% removal of the initial PHCs concentration was observed with BC1 and BC2, respectively, in the presence of chickpea plants at 3.0% PHCs, significantly higher than in the unplanted controls at 3.0% PHCs (35.0% for BC1 and 41.0% for BC2).
Correlation matrix and principal component analysis
To examine the associations between the various growth, physiological, antioxidative, and nutritional characteristics of chickpea plants, a correlation analysis was performed on the mean values of the data (Fig. ). A strong positive correlation among physiological, growth, and nutritional characteristics (0.87 to 0.99) was observed when BC was applied in uncontaminated soil, whereas a strong negative correlation (−0.72 to −0.98) was observed between the growth, physiological and nutrient attributes and the antioxidant contents and/or petroleum hydrocarbon contamination. Moreover, a strong positive correlation was noticed between antioxidative contents and hydrocarbon contamination levels. The contribution of the different treatments, with or without hydrocarbons, was examined by PCA.
A high cumulative variation of 97.8% was explained by PC1 (95.2%) and PC2 (2.63%); overall, the variation among the applied treatments is illustrated in Fig. .
The inclusion of hydrocarbons in soil reduced growth, physiology and nutrient assimilation in chickpea plants. Nevertheless, the addition of the bacterial consortia (BC1 and BC2) significantly alleviated the harmful impacts of PHCs on the physiological processes and development of chickpea plants grown in PHCs-contaminated soil. Seedling emergence decreased in the present study, in agreement with Grifoni et al. , who also observed reduced emergence when seeds were sown in oil-spilled soils. The reduced germination may be attributed to the light, volatile fractions of PHCs that permeate the seed coat and cause embryonic death . Moreover, the hydrophobic nature of PHCs hinders water uptake and disturbs imbibition . However, inoculation with microbes under PHCs contamination alleviates the pollutant's toxicity, possibly through changes in soil pH and the C:N ratio . This effect may also be due to exopolysaccharide secretion by the bacterial strains, which moistens the seed coat and promotes early germination . The presence of petroleum hydrocarbons in soil leads to stunted plant growth, as was also observed in the present study. In soil, PHCs reduce oxygenation by blocking soil pores, creating anaerobic conditions for plant roots . The PHCs led to decreases in root and shoot dry biomass and plant height, as already reported by Ali et al. . The possible cause is cell injury that restricts the uptake of water and nutrients and hinders plant growth . The phytotoxic effects of PHCs negatively alter plant biochemical processes by changing cell membrane permeability and water potential and by disrupting enzymatic functions (which causes the production of ROS, H 2 O 2 and ethylene), subsequently affecting the plants' metabolic pathways . The incorporation of mixed bacterial strains in this study increased plant growth by mitigating the harmful effects of PHCs. The microbes used in the current study have already been identified as plant growth promoters, and their details are provided in our previous work . In both cultures, the mixed microbes belonging to Achromobacter , Alcaligenes , Sphingobacterium and Stenotrophomonas can produce growth hormones such as auxins and gibberellins, as well as ACC deaminase, iron- and phosphate-solubilizing enzymes and oxidases that convert H 2 O 2 to H 2 O, strengthening the plants against the negative impacts of PHCs stress, as has also been shown in other studies . Plant physiology is also greatly influenced by hydrocarbon toxicity, which lowers the levels of chlorophyll pigments and carotenoids in leaves . Under stressed conditions, the chloroplast absorbs fewer photons from sunlight to drive the ETR and the use of the absorbed energy . This may be because of elevated H 2 O 2 levels, which damage the chloroplast membrane and thylakoid structure under stress . With the addition of the bacterial consortia, the harmful impacts of PHCs on photosynthetic processes and chlorophyll synthesis were substantially reduced . The inoculated microbes limited the accumulation of H 2 O 2 and boosted photosynthetic activity and chlorophyll synthesis, probably by lowering oxidative damage .
In the present study, the alteration in cell membrane structure might be due to the volatile fractions of PHCs, which rupture cells and thereby affect the MSI and RWC of leaves . PHCs also disturb photosynthetic rates by reducing CO 2 and H 2 O assimilation, and the prevailing conditions initiate ROS generation, which disturbs the plant's physiology . The addition of both bacterial consortia to contaminated soil reduced PHCs toxicity and strengthened the plants' physiology. Under stress, the microbes produce ACC deaminase, which hydrolyzes ACC and minimizes ethylene production in plants . In the present study, both cultures performed well in minimizing the negative impacts of PHCs on the physiology, growth and development of plants grown under PHCs stress; these results support the findings of Ali et al. and Farooq et al. . In the current work, proline and other antioxidant levels were also higher in leaves, indicating the induction of stress-related enzymes under PHCs contamination. According to Hasanuzzaman et al. , proline is involved in maintaining cell structures, scavenging free radicals, and stimulating stress-related intracellular metabolic processes. The enhanced build-up of proline in leaf tissues therefore allowed the stressed plants to balance their redox potential and control cellular metabolite levels . Oxidative stress in plants causes cell membrane damage and disrupts ionic homeostasis, which initiates the production of larger amounts of ROS in tissues . In general, proline mitigates these effects by lowering stress, regulating mitochondrial activity, balancing cell division, strengthening membrane stability, and preventing electrolyte leakage from plant cells . This research found that stress-responsive enzymes such as SOD, POD and CAT were highly active under PHCs stress. The antioxidative system protects plants against the damaging effects of contaminants , and SOD serves as the plant's first line of defense against ROS-induced stress . Superoxide dismutase first converts superoxide to H 2 O 2 and O 2 , and POD and CAT then convert this H 2 O 2 to water and oxygen . Plants produce antioxidant enzymes that help them withstand abiotic stresses . Here, the activities of POD, SOD, and CAT changed when the bacterial consortia were added to the hydrocarbon-contaminated treatments. Microbes upregulate SOD activity and produce melanin, a pigment involved in scavenging free radicals . The chemical pathways for protein synthesis resume soon after superoxide is removed by SOD activity . In the current study, the bacterial consortia (BC1 and BC2) performed well under PHCs stress, reducing the detrimental impacts of PHCs on plants by improving the stress-responsive enzymes. In this study, PHCs also reduced nutrient assimilation in chickpea plants. A change in soil pH and an increased C:N ratio might be possible reasons for the lower N uptake, as already reported by Xu et al. . N deficiency could subsequently affect cell enlargement and leaf growth by disturbing amino acid formation and protein synthesis. Similarly, P and K uptake by plants also decreased under PHCs stress . P and K control several metabolic processes, including respiration, energy transfer, signal transduction, macromolecule biosynthesis and photosynthesis, which are essential for plant growth .
Nonetheless, the addition of the bacterial consortia significantly improved nutrient uptake by the plants, probably owing to the secretion of stress-relieving enzymes and/or the breakdown of PHCs . In the present study, PHCs removal was slower in the early phase of incubation; however, a substantial increase in microbial growth and PHCs degradation was noticed with time. This could be explained by a slow lag phase of microbial growth, as observed by Omrani et al. . In short, the indigenous microbial communities present at site 1 and site 5 were able to remediate a substantial amount of PHCs in liquid culture, whereas the other sites had comparatively less potential to degrade PHCs. The considerable differences in PHCs removal by microbes from the different sites could be due to the aging of the PHCs and/or geographical effects , because microbial communities from different origins have been observed to have different potentials for hydrocarbon degradation . Microbes utilize PHCs as a source of carbon and energy, which was evident from the increase in the OD 600 values of the cultures after incubation . Microbes reportedly have a special ability to use hydrocarbons via various enzymes and to break them down into smaller and/or non-toxic substances . Microbial growth, estimated as OD 600 , increased significantly along with the removal of PHCs during the incubation period, indicating the growth potential of the enriched microbes under PHCs contamination . In later stages, after three sequential transfers, PHCs removal increased further; this enhanced removal could be due to improved adaptation and the selective enrichment of PHCs degraders . Overall, the contaminated sites harbor microorganisms capable of degrading hydrocarbons in liquid culture. These bacteria were isolated from the enriched cultures and mixed to construct the bacterial consortia used in the rhizoremediation experiments. Various rhizoremediation studies describe the importance of microbial communities (owing to their diversified enzymatic activities and adaptability) for better plant growth and PHCs remediation in polluted environments . The BC developed in the present work were able to remove a significant amount of hydrocarbons from soil within a relatively short exposure time, and the dual nature of the selected microbes (promoting both PHCs degradation and plant growth) remarkably improved the rhizodegradation process. Moreover, plants can tolerate higher amounts of pollutants in the rhizosphere and can potentially phytoremediate a certain concentration of PHCs from the soil . Furthermore, rhizoremediation was highest in the presence of the added microbes. The increased PHCs removal from soil may result from the secretion of organic acids, protonation, release of chelators (chelation), chemical transformation, and phosphate mineralization by the inoculated bacteria , which in turn increase the bioavailability of PHCs to plants and microbes . In the current work, the selected bacterial strains from the genera Achromobacter , Alcaligenes , Sphingobacterium , and Stenotrophomonas were used to construct two different bacterial consortia. Alcaligenes sp. secretes phosphatase enzymes that mineralize fixed P in soil and improves rhizoremediation capability by enhancing plant growth. Achromobacter denitrificans is capable of degrading aromatic and long-chain PHCs through the production of hexadecane monooxygenase and 2,3-dioxygenase . Arulazhagan et al.
reported that Stenotrophomonas maltophilia AJH1 can degrade volatile PHCs. Furthermore, Sphingobacterium sp. KM-02 is also known for its potential to biodegrade the volatile fractions of PHCs. Additionally, the ability of Sphingobacterium to produce biosurfactants may enhance the bioavailability of PHCs and thus accelerate the remediation process. Overall, the presence of hydrocarbons in soil exerted acute toxicity on plants, whereas the addition of the two different bacterial consortia not only improved plant growth but also enhanced nutrient uptake, strengthening the plants' physiology. Owing to their diversified enzymatic properties, these microbes removed a substantial amount of PHCs from the soil and thereby reduced the risk of phytotoxicity in plants. In general, BC2 showed better growth and hydrocarbon degradation than BC1. Thus, native bacterial consortia could be used for plant growth promotion and rhizoremediation in soil contaminated with organic pollutants, including PHCs. The purpose of this research was to evaluate and compare the efficacy of two constructed bacterial consortia in reducing hydrocarbon stress and boosting chickpea plants in hydrocarbon-contaminated soil. The presence of PHCs inhibited the development of chickpea plants. More importantly, the microbes reduced the phytotoxic impacts of PHCs on chickpea growth and thus enhanced plant growth, most likely through antioxidative defense and plant growth-promoting properties. Additionally, the application of bacterial consortia with plant growth-promoting attributes significantly boosted the removal of PHCs from soil and enhanced the capability of chickpea plants to eliminate PHCs. Both bacterial consortia were largely successful; however, BC2 performed better than BC1 in promoting plant growth and in degrading a higher amount of PHCs. Based on our results, these bacterial consortia may be promising contenders for future phytoremediation efforts owing to their ability to aid PHC degradation and boost plant growth in PHC-contaminated soil. Below is the link to the electronic supplementary material. Supplementary Material 1
Involving patients as research partners in research in rheumatology: a literature review in 2023
35a8e5ff-5afb-485b-85b3-0e2bbdbaeebb
10668287
Internal Medicine[mh]
The active involvement of patients as research partners (PRPs) brings valuable knowledge and unique perspectives to medical research and is recommended by the European Alliance of Rheumatology Associations. However, the implementation of these recommendations in rheumatology research is unknown. PRP involvement in randomised trials remains low and is lacking in translational research projects in rheumatology. Our study identifies areas of rheumatology research (randomised controlled trials and translational research) where the involvement of PRPs needs to be improved. The findings highlight the importance of raising awareness of the benefits of involving PRPs in research and of providing resources and training for researchers and PRPs to facilitate their involvement. It is well established that the active involvement of patients as patient research partners (PRPs) brings valuable knowledge and unique perspectives to medical research. PRPs' knowledge, personal experiences and understanding of their disease and of the care they receive provide invaluable information that complements and enriches the scientific understanding of these diseases. This perspective bridges the gap between research and the real-life experiences of patients, making research more applicable and effective. Government-supported initiatives, such as the US Patient-Centred Outcomes Research Institute (PCORI) in 2010, have moved patients from research subjects to contributors to the research agenda. Recognising the benefits of PRP involvement in research, the Food and Drug Administration (FDA) as well as the European Medicines Agency (EMA) have encouraged their involvement in medical research projects. In rheumatology, the European Alliance of Rheumatology Associations (EULAR) published recommendations in 2011 advocating the inclusion of PRPs in research projects at all phases of the project. Despite this initiative, it has been reported that PRPs were involved in only 2% of rheumatology randomised clinical trials (RCTs) published between 2016 and 2020. Whether patient involvement in RCTs has increased since then remains unknown. In translational research, it has also been shown that the involvement of PRPs is beneficial for research, patients and researchers. A scoping review in 2021 identified 32 studies engaging patients in preclinical laboratory research; however, most of the studies concerned non-rheumatological disorders. Therefore, the extent of PRP involvement in translational research in rheumatology remains largely unexplored. We aimed to assess the involvement of PRPs in rheumatology research by performing a scoping review of recently published RCTs and translational studies. We performed a scoping review, which allowed us to obtain an overview of the current state of patient involvement in research in rheumatology. The MEDLINE and EMBASE databases were searched on 1 March 2023 using the following keywords: 'systemic/epidemiology' OR 'systemic/aetiology' OR 'systemic/immunology' OR 'systemic/metabolism' OR 'systemic/pathology' OR 'systemic/physiopathology' (MeSH Terms) (for translational studies) and 'clinical trial' (MeSH Terms) (for RCTs). The search covered four rheumatic diseases (rheumatoid arthritis, psoriatic arthritis, systemic lupus erythematosus and lower extremity osteoarthritis) using the following keywords: 'arthritis, rheumatoid' OR 'lupus erythematosus, systemic' OR 'arthritis, psoriatic' OR 'osteoarthritis, knee' OR 'osteoarthritis, hip' OR 'osteoarthritis'.
The review was performed by two qualified specialists in rheumatology with experience in translational research and clinical studies (ME and DB). No language restriction was applied. In cases of uncertainty about PRPs’ involvement, the other members of the team were consulted. The 80 most recent articles (40 for translational studies and 40 for RCTs) (20 for each of the target diseases, ie, 10 RCTs and 10 translational studies per disease), published up until 1 March 2023, in rheumatology and general scientific journals with an impact factor>5, were included. General study features such as study design, disease, geographical location of the study and funding source were collected. The presence and type of PRP involvement were assessed after evaluation of the ‘Methods’, ‘Author affiliations’ and ‘Acknowledgements’ sections. In the case of PRP involvement, we collected data concerning the number and the phase of the study in which PRPs were involved, the way of recruitment and the provided training. Data analysis was descriptive. A total of 221 studies were initially screened; most of them were excluded due to inappropriate study design or disease focus (flowchart in ). Half of the translational studies were published in rheumatology journals. Geographically, 50% of the studies were conducted in Asia, 30% in Europe, and 20% in North America. Most of the studies (77%) had a local setting and 75% were non-industry funded. None of the translational studies involved PRPs . However, one study mentioned patient engagement for the dissemination of results to patients and the general public, which is not usually considered a research partnership. RCTs were predominantly published in rheumatology journals (78%) and were conducted in North America (52%), Europe (25%) and Asia (23%). Among the 40 RCTs, eight (20%) reported PRP involvement (three on rheumatoid arthritis, three on lower extremity osteoarthritis, one on systemic lupus erythematosus and one on psoriatic arthritis) . The trials reporting PRP involvement were mainly from Europe (6/8, 75%) and North America (2/8, 25%) with the majority of them non-industry funded (6/8, 75%). The phase of the study in which PRPs were involved was specified in six of the RCTs, with all reporting PRP participation in the study design or design of the intervention and two of them also reporting involvement in the interpretation of the results. Specifically, in two of the studies, PRPs were involved in the process of designing the intervention (a website for patient education in both cases). All the trials reporting the number of PRPs (6/8, 75%) involved at least two PRPs (four RCTs involved two PRPs, one involved four PRPs and one involved five PRPs). Only one study reported gender of the involved PRPs (‘1 female and 1 male’). No other information on PRP diversity was reported. Two studies mentioned that PRPs were recruited through patient organisations. One study reported training for PRPs for the study. Further information is reported in . In this scoping review of recent published studies in rheumatology research, PRPs were involved in 20% of the RCTs and in none of the translational studies. These findings underline a significant gap between recommendations and actual practice. This is the first study assessing involvement of PRPs in translational studies and we showed that not a single study reported PRP involvement. This is a significant concern, as translational research is an important step in bringing laboratory findings to clinical practice. 
Involvement of PRPs at this stage can ensure that the research is grounded in patient experiences and needs, thereby improving its applicability and potential impact on patient care. The reasons behind this lack of involvement need to be explored, and strategies need to be developed to promote PRP inclusion in translational research. There has been a modest improvement in PRP involvement in RCTs, rising from 2% in trials published between 2016 and 2020 to 20% in the present review. This finding may suggest that there is a growing recognition of the value of PRP contributions in clinical trials. However, there is still considerable room for improvement. Given that RCTs form the backbone of evidence-based medicine, it is crucial that PRPs are adequately consulted to ensure that research outcomes are relevant and meaningful to patients. In addition, whether this increase in PRP involvement in RCTs reflects a form of tokenism or corresponds to genuine participation has not been investigated and should be further studied. This persistent gap in the implementation of the 2011 recommendations on the involvement of PRPs in research needs to be set against the now-demonstrated benefits of involving PRPs in research and clinical trials. First, patient involvement has been shown to improve the quality and relevance of research, as well as the implementation and dissemination of study results. In addition, the participation of PRPs improves the recruitment of participants in clinical trials, especially if it includes people with lived experience of the health condition under study. Furthermore, a PRP intervention specifically targeting retention (ie, the use of lay community health advisers to support participants) was significantly associated with a lower drop-out rate. These observations underline the central role of patient participation and reaffirm that it is not merely a ceremonial inclusion, but an essential element that amplifies the power and relevance of research. Interestingly, most of the trials involving PRPs were conducted in Europe and North America versus none in Asia. This raises questions about potential regional differences in the recognition and implementation of PRP involvement in rheumatology research. Similarly, in another study, articles from the UK were 10 times more likely to include PRPs than those from China. This geographical trend may be influenced by the changing research landscape in these regions, where patient involvement is increasingly recognised as an essential component of holistic research in Europe. European institutions and initiatives, for example, have actively encouraged public and patient participation in healthcare research, which may explain the greater representation of PRP involvement in studies in this region. Similarly, there appears to be a convergence in the way PRPs are conceptualised in Europe, with similar attitudes and approaches in different European countries. In addition, public funding can encourage the inclusion of PRPs: for example, the UK NIHR has for several years included in its application process a section in which applicants can either apply for funding to include PRPs, or justify why they are not doing so. In Asia, it has been shown that patient involvement and participation in Chinese hospitals is influenced by factors such as the level of involvement of civil society, the move towards commercialisation and the cultural resources of individuals.
Furthermore, in terms of political context, China lacks political support for involving patients in healthcare. Thus, efforts to enhance PRP involvement should consider the local cultural context and work with local patient advocacy groups to foster a culture that supports patient engagement in research. In addition, it should be noted that the majority of studies reporting PRP involvement were not funded by industry. One possible explanation could be the intrinsic nature of grants and public funding, for example, the UK NIHR, which often prioritise or even mandate patient and public involvement, ensuring that researchers incorporate PRP views from the proposal stage. Conversely, industry-funded studies may face a different set of challenges, such as legal restrictions, proprietary concerns or time constraints that could discourage PRP involvement. It would be essential for future research to delve deeper into these nuances to understand the dynamics between funding sources and PRP involvement. Although our study sought to provide an overview of the involvement of PRP in rheumatology research, our approach may have inherent limitations. First, it is possible that important studies have been missed due to the use of a scoping review. However, our aim was not to analyse the most important articles in the field, but rather the most recent ones, in order to provide an overview of the current state of PRP involvement in rheumatology research and to identify any gaps. Furthermore, our review focuses on four prespecified rheumatic diseases and does not explore the involvement of PRP in all rheumatological disorders. However, to provide a comprehensive overview, we have decided to focus on four frequent rheumatic diseases, of varied aetiology and dynamic in terms of research. Further specific studies will be needed to confirm our findings in other specific rheumatic diseases. In addition, the methods used to determine the involvement of PRPs were based on the data reported in the articles; it is possible that some studies included an involvement of PRPs that was not explicitly mentioned. However, EULAR recommends acknowledgement of PRP in published studies, and the lack of mention of PRP in the study may indicate a form of tokenism rather than genuine involvement. Despite greater patient involvement in rheumatology research, particularly in the development of guidelines and in international research networks, the involvement of PRPs is not similar in all areas of rheumatology research. Our study highlights persistent gaps in RCTs and translational research, where PRP involvement remains low. To address these disparities, concerted efforts are needed from all stakeholders, including researchers, funding bodies and patient advocacy groups. There is a need to raise awareness about the benefits of PRP involvement, provide resources and training to facilitate their engagement.
Sexual and reproductive health communication between parents and high school adolescents in Vientiane Prefecture, Lao PDR
6e4b5368-ecf3-4f01-965e-cc6df4de25c6
7480649
Health Communication[mh]
Adolescent health has become a priority on the global agenda, with an added focus on addressing adolescent health in low- and middle-income countries if the Sustainable Development Goals (SDGs) are to be reached. Adolescence is defined as the period of life between 10 and 19 years of age. It is characterised by significant physical, cognitive, emotional and social changes as children transition to adulthood. Traditionally in Lao PDR, as in many Asian cultures, sexual behaviour before marriage has been considered highly inappropriate, with parents emphasising abstinence; nevertheless, many adolescents are sexually active while unmarried or out of union. Lao PDR, a lower-middle-income country in South-East Asia, has one of the highest adolescent pregnancy rates among countries in the region, at 83 pregnancies per 1,000 adolescent girls aged 15–19. Early childbearing is common, with about 3.6% of women giving birth by age 15, and more than 1 in 10 girls aged 15–19 have begun childbearing. In addition, abortion is common in Vientiane, with over 20% of sexually active young women reported to have had an abortion, often in unsafe conditions. Additionally, the percentages of new HIV cases among youths aged 15–24 years in Vientiane were 16.7%, 18.8% and 15.9% in 2010, 2011 and 2012, respectively. Parent-adolescent sexual communication can promote safer sexual behaviour among adolescents, including the utilisation of modern contraceptive methods. Through discussion of adolescent sexual activity, parents can influence adolescents' sexual attitudes, values and beliefs in ways that help reduce sexually risky behaviour. For parent-adolescent sexual communication to be effective, however, parents need to be able to communicate openly with their children and give them accurate and correct advice about SRH as they become more sexually aware and active. Additionally, adolescents need to feel comfortable when discussing reproductive health with their parents, including talking about the physical changes at puberty, the menstrual cycle, wet dreams, birth control pills and condom use. When adolescents and parents feel uncomfortable or not empowered to engage in open discussion about sexual health, adolescents may become involved in risky sexual behaviour at an early age, which may result in unwanted pregnancies, abortion (sometimes in unsafe conditions) and exposure to sexually transmitted infections (STIs), including HIV/AIDS. Despite the potential benefits of parent-adolescent sexual communication, studies have found that many adolescents have not discussed sexual topics with a parent. Studies have shown that only 20%, 36.9% and 30.6% of adolescents in Lesotho, Ethiopia and China, respectively, had discussed SRH issues with their parents. For adolescents, the barriers to discussing these topics with parents included parents' lack of knowledge, negative attitudes and socio-cultural taboos. SRH communication may protect adolescent children from SRH problems, but there is little research on what helps adolescents initiate conversations about SRH matters with their parents. The significance of this study is that it will help in designing appropriate interventions to improve open discussion with parents and so prevent adverse SRH outcomes for adolescents. Therefore, this research aimed to study adolescent-parent communication on SRH issues and to determine the factors associated with general and SRH communication between adolescents and their parents in Vientiane.
Study design and setting This study employed a cross-sectional design with a multi-stage sampling technique. First, because of limited time and budget, one school was selected by random sampling from the 14 public high schools in four urban districts (Chanthabouly, Sikhottabong, Sisattanak and Xaysettha) and one school from the five public high schools in one rural district (Sangthong), giving one urban-based and one rural-based high school. Next, proportional sampling was used to select the number of classes in each grade. Finally, all students in the selected classes were invited to participate in the study. Participants The population size was 950 adolescent students in grades 9 to 12 at the one urban high school and the one rural high school. The sample size of 384 was determined using the single population proportion formula with the following assumptions: p = 0.5 (the proportion of students communicating on SRH issues with parents was assumed to be 50%, because this proportion is unknown for Lao PDR); d = 0.05 (a margin of error of 5%); and Z(α/2) = 1.96 (a 95% confidence level, corresponding to a significance level of 5% (α = 0.05)). To this were added a 10% non-response rate and a design effect of 1.3. Measurements The independent variables covered four sections: (1) socio-demographic factors (age, grade, sex, school location, parents' marital status, living arrangement, and the age, education and occupation of fathers and mothers), (2) the attitudes of students, (3) influential persons (grandparent, father, mother, sister, brother, teacher, peer, neighbour, health staff) and (4) sources of SRH information (school, health facility, youth centre, internet, TV, radio, newspaper, magazine). The questionnaire on attitudes towards general and SRH communication was validated according to the Parent–Adolescent Communication Scale and translated into Lao. Each question was answered on a four-point Likert scale of attitude statements ranging from 1) strongly disagree, 2) disagree, 3) agree to 4) strongly agree. However, four negatively worded questions were scored in reverse: 4) strongly disagree, 3) disagree, 2) agree and 1) strongly agree. The attitude towards general communication with parents scale comprised 15 questions, with a reliability coefficient of 0.816. A summated composite score was calculated, with a minimum of 15 and a maximum of 60. The actual scores were classified based on Bloom's criteria, modified into two groups as follows: scores of 15–27 (<60%) indicated a negative attitude and scores of 28–60 (>60%) indicated a positive attitude. The attitude towards SRH communication with parents scale included 22 questions, with a reliability coefficient of 0.808. The summated composite score had a minimum of 22 and a maximum of 88. Actual scores were classified based on Bloom's criteria, modified into two groups as follows: scores of 22–39 (<60%) indicated a negative attitude and scores of 40–88 (>60%) indicated a positive attitude.
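To make the sample size calculation above concrete, the following minimal Python sketch applies the single population proportion formula with the stated inputs; the design-effect and non-response adjustments are shown only for illustration, since the text does not detail how they map onto the final figure of 384 completed questionnaires.

import math

# Single population proportion formula with the inputs stated above.
z = 1.96   # Z(alpha/2) for a 95% confidence level
p = 0.5    # assumed proportion communicating on SRH issues with parents
d = 0.05   # margin of error

n_base = math.ceil((z ** 2) * p * (1 - p) / d ** 2)   # about 384-385

# Illustrative only: the text mentions a design effect of 1.3 and a 10%
# non-response allowance, applied here as simple multipliers.
n_adjusted = math.ceil(n_base * 1.3 * 1.10)

print(n_base, n_adjusted)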
The dependent variable was the frequency of communication between adolescents and parents on SRH issues, measured by combining eight questions on SRH communication with the father and eight questions on SRH communication with the mother. It was assessed using a Likert scale of eight items/topics with responses of 0) never (never in six months), 1) rarely (once in six months), 2) sometimes (twice in six months) and 3) often (more than twice in six months), derived from the weighted Topics Measure of Family Sexual Communication Scale. The items included physical changes at puberty, menstrual cycles/wet dreams, premarital sex, multiple sex partners, unwanted pregnancies, birth control pills, condom use, and STDs/HIV/AIDS. Responses to each of these eight items were classified into two groups: No (never and rarely) = 0 and Yes (sometimes and often) = 1. A summated composite score was then produced by totalling all items, and the Cronbach's alpha for internal consistency was 0.903. For the purpose of this analysis, parent-adolescent SRH communication was regarded as present if the adolescents and their parents had discussed at least four of the eight SRH topics sometimes or often. Data collection The data collection team comprised eight people who were trained on the objectives of the study, the content of the questionnaire and how to conduct the study in a high school setting. The team then visited each selected school and identified eligible students. Two research assistants were allocated per class; they explained the purpose of the research to the students and the content of each section of the questionnaire. The students completed the self-administered questionnaire, and the research assistants checked the completeness of all answers. If answers were missing, the research assistants returned the questionnaire to the student to complete the missing questions. After all questionnaires had been checked, the students were allowed to leave the classrooms. Statistical analysis The data were entered and cleaned using Epi Data software version 6.0 and analysed using STATA version 14.2. Descriptive statistics were used to describe the numbers and percentages of the dependent and independent variables. Bivariate analysis was then performed between the independent variables and the dependent variable. Variables found to be significant at p < 0.05 in the bivariate analysis were entered into the multivariable logistic regression model, and variables with p < 0.05 in the multivariable analysis were taken as significant predictors. Crude and adjusted odds ratios with their 95% confidence intervals were calculated and presented in the text and tables.
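As a concrete illustration of the scoring rule described above, the following sketch dichotomises the eight topics and applies the "at least four of eight" criterion; the data and column names are hypothetical, not the study dataset.

import pandas as pd

# Hypothetical responses coded 0=never, 1=rarely, 2=sometimes, 3=often
topics = ["puberty", "menstrual_wet_dreams", "premarital_sex", "multiple_partners",
          "unwanted_pregnancy", "birth_control", "condom_use", "sti_hiv"]
df = pd.DataFrame({t: [0, 2, 3] for t in topics})

# Dichotomise each topic: 'sometimes'/'often' (code >= 2) counts as discussed
discussed = (df[topics] >= 2).astype(int)

# Composite score = number of topics discussed; communication regarded as
# present when at least four of the eight topics were discussed
df["srh_score"] = discussed.sum(axis=1)
df["srh_communication"] = (df["srh_score"] >= 4).astype(int)

print(df[["srh_score", "srh_communication"]])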
In total, 384 students aged 14–17 in grades 9 to 12 were enrolled in the study. About 63% of the respondents were from an urban district, slightly more than half were female, and the mean age of respondents was 15.7 years. The reported mean age of their fathers was 47.2 years, and 40.6% of fathers had graduated at the tertiary education level; fathers were mostly employed as government staff or in private businesses. The mean age of their mothers was 42.6 years, and 41.9% of mothers had graduated from high school or vocational college; about 27.1% of mothers were housewives. Frequency of adolescent–parent communication on SRH issues About 21.3% of students in this study had discussed at least four of the eight SRH topics with their parents during the six months prior to the survey. Of these students, male adolescents discussed the topics more frequently with their parents than female adolescents (29.2% versus 16.1%). Male participants also discussed SRH issues with their fathers and their mothers more often than female participants. These results highlight that boys talk to both parents about SRH issues more than girls do; however, both adolescent boys and girls talk with their mothers more than with their fathers.
Outside the family, about 59.4% of adolescents wanted to discuss SRH issues with health staff, followed by peers (47.9%) and teachers (32.8%). The main reason adolescents wanted to discuss SRH issues with health staff was that health staff had more knowledge of sexual and reproductive health than they did, but the most frequently cited reason for choosing someone for discussion was that the person was always a good listener on SRH issues. Attitudes of students towards general and SRH communication with parents About 70.8% of both male and female students had positive attitudes towards, and accepted the importance of, maintaining discussion about general matters with their parents, and more than half of adolescents agreed that their parents were good listeners. Moreover, 94.1% of students had positive attitudes towards, and accepted the importance of, discussing SRH issues with their parents, meaning that almost all adolescents were concerned about SRH problems and wanted to consult their parents. However, the frequency of SRH communication between parents and adolescents was low. SRH information sources The main sources of SRH information for adolescents were health facilities, because they received good information from them. The internet was cited as the second most important source of SRH information because it was easy to access. Factors associated with SRH communication between students and their parents The multivariate logistic regression model showed that being a male adolescent (AOR = 2.1; 95% CI 1.2 to 3.5), urban school location (AOR = 0.2; 95% CI 0.1 to 0.5), having an older father (AOR = 1.7; 95% CI 1.0 to 2.9), positive attitudes towards general communication with parents (AOR = 2.2; 95% CI 1.1 to 4.2) and accessibility to many sources of SRH information (AOR = 5.2; 95% CI 2.4 to 11.4) were significantly associated with adolescent-parent communication on SRH issues (p < 0.05).
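For readers who want to see how adjusted odds ratios with 95% confidence intervals of the kind reported above are typically derived, the sketch below fits a multivariable logistic model with Python's statsmodels; the variables mirror those in the model, but the data, variable names and coding are hypothetical rather than taken from the study.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical data standing in for the survey responses
rng = np.random.default_rng(0)
n = 300
df = pd.DataFrame({
    "srh_communication": rng.integers(0, 2, n),   # 1 = discussed >= 4 topics
    "male": rng.integers(0, 2, n),
    "urban_school": rng.integers(0, 2, n),
    "older_father": rng.integers(0, 2, n),
    "positive_attitude": rng.integers(0, 2, n),
    "many_info_sources": rng.integers(0, 2, n),
})

model = smf.logit(
    "srh_communication ~ male + urban_school + older_father + "
    "positive_attitude + many_info_sources",
    data=df,
).fit(disp=False)

# Adjusted odds ratios with 95% confidence intervals
aor = pd.concat([np.exp(model.params), np.exp(model.conf_int())], axis=1)
aor.columns = ["AOR", "2.5%", "97.5%"]
print(aor.round(2))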
Parents are important role models in adolescents' lives. They can directly or indirectly transmit values, traditions and lifestyles to their children. Positive family communication helps adolescents develop the values, security and sense of worth that can lead to healthy decision-making, including around sexual health. Issues related to SRH can be difficult topics to discuss within the family. The purpose of this study was to assess high school adolescents' communication with their parents about sexual and reproductive health, the frequency of this communication and the responses to SRH topics from the adolescents' perspective as expressed in the questionnaire. In this study, fewer than 1 in 4 adolescents discussed SRH issues regularly with their parents, which indicates that SRH issues may be difficult for parents and adolescents to discuss. The reasons for this are not clear from this study but are likely to include embarrassment, lack of confidence or socio-cultural norms that make sexual activity a taboo topic. This is of concern, as adolescents are more likely to make informed, less risky decisions about their sexual health when they have access to appropriate and timely advice, with parents being an influential source of information for their adolescent children. This study indicated that both male and female adolescents were more comfortable discussing SRH issues with mothers than with fathers. This might be because mothers were also perceived to be better listeners than fathers. Fathers may find it harder to discuss potentially embarrassing relational topics and to have an open conversation about sexual health with their adolescent children, and may need further support in initiating such conversations. Unlike a study in southern Ethiopia, the present study showed that male adolescents were more likely than female adolescents to have discussed SRH issues with their parents. This might be because boys feel less embarrassment than girls when discussing SRH. It may also be because male adolescents stay at home less and go to bars and nightclubs, meaning parents may wish to talk more to their sons about contraceptive use to prevent unwanted pregnancies. Additionally, parents may not like to think of their adolescent daughters having sex, and thus may feel embarrassed to talk about sex or may fear that talking about safe sex will promote promiscuous behaviour.
Adolescents with older fathers were more likely to discuss SRH issues, as was found in a previous study in Unguja-Zanzibar, Tanzania. This may be because older parents were more experienced in communicating and more open to talking with their adolescent children than younger parents. Adolescents who had positive attitudes about communication with their parents were more likely to discuss SRH issues than those who had negative attitudes towards such communication. More than half the respondents identified that it was important to discuss SRH issues with their parents, particularly if their parents were good listeners. Thus, developing good listening skills is likely to motivate sons and daughters to talk with their parents and to feel more comfortable and confident talking to them about SRH issues. In southern Ethiopia, students who perceived the importance of discussing SRH issues with their parents were more likely to do so than those who did not perceive the importance of such discussion. The present study also showed that adolescents who received information about SRH from multiple sources were more likely to discuss these issues with their parents. Similarly, a study in northwest Ethiopia reported that students who had obtained SRH information were more likely to communicate about SRH issues with their parents than those who had not. This could be explained by the fact that such respondents have some awareness, might be more eager to communicate on SRH issues, and might be better prepared by the information they received to begin that communication. Adolescents attending schools in the urban area were less likely to communicate with parents on SRH issues than those in the rural area. One reason might be that in rural settings, where early marriage and childbearing are more common, parents have more liberal attitudes. As with all studies, this study has limitations. Firstly, the sample comprised only one rural and one urban school, so it may not represent the Lao PDR population overall. Secondly, adolescents in this study were limited to middle adolescents, aged 14 to 17. Another issue was that the responses from adolescents might differ from what their parents had actually perceived. Thirdly, our measure of sexual communication focused on several sexual risk and protective behaviours but did not assess many other topics that theorists and scholars have described as being part of sexual health, such as intimacy and sexual pleasure. In addition, recall bias might have occurred: remembering experiences of SRH communication with parents during the previous six months may be inaccurate, so the adolescents' responses might be over-reported or under-reported. Fourthly, the study did not capture the parents' perspectives; however, anecdotal evidence suggested that adolescents' perspectives are valid in parent-adolescent sexual communication. Parent-adolescent sexual and reproductive health communication was very limited and was associated with the sex of the adolescent students, school location, attitudes of adolescents, age of fathers and information sources. Our findings emphasise the need to improve SRH communication for adolescents and parents in Lao PDR. Therefore, sex education should improve SRH communication skills in all schools, especially in student populations with a high percentage of younger fathers.
In addition, such programmes should involve multiple sectors, such as health, education and youth services, to provide many sources of information and to shift students' negative attitudes towards SRH communication with parents in a positive direction. Moreover, parents should give equal attention to male and female students when discussing SRH issues; in particular, parents should discuss SRH issues with girls, because girls bear the burden of unplanned pregnancies. Further studies should be carried out from the parents' perspective to identify factors that affect the discussion of SRH issues between students and parents in other provinces of Laos. Qualitative research should explore in more depth the SRH topics discussed between parents and adolescents and the barriers to such communication.
Testing whether barriers to a hypothetical screening test affect unrelated perceived benefits and vice versa: A randomised, experimental study
38b31f35-9b82-4434-9dda-dfefe0edb9c6
5332122
Preventive Medicine[mh]
Introduction Screening is an important public health strategy for reducing cancer mortality and incidence. There is potential to improve population health by increasing uptake of available screening tests, but people's willingness to undergo them typically requires them to accept some short-term burden and some level of risk in exchange for a degree of potential health benefit in the relatively distant future. Much informative research has been carried out on how invitees perceive benefits and barriers of screening in order to address the policy goal of improving uptake (and satisfaction with screening services in general). Studies in this area have often been guided by psychological theories which assume implicitly that perceptions of barriers and benefits are independent. For example, the Health Belief Model includes benefits and barriers as discrete 'constructs' that are often analysed separately. Similar conceptual and analytical approaches are also apparent in less theoretically-oriented research (e.g. where perceived barriers are examined without assessment of perceived benefits). However, this assumption may not be true; appraisals of barriers may be less negative when benefits are high vs. when they are low (and likewise for perceptions of benefits when barriers are low vs. when they are high), even when those benefits are objectively unrelated. Previous research provides several theoretical bases for this hypothesis. Most notably, much research has found evidence that perceptions can be systematically 'irrational' in the context of evaluating whether to carry out a given health-related behaviour. For example, one cognitive shortcut known as the 'affect heuristic' suggests that individuals do not necessarily carry out separate appraisals of the favourable and unfavourable characteristics of a behaviour and evaluate the balance. Instead, both aspects are evaluated together, in the context of a shared 'pool' of feeling or emotion (i.e. 'affect'). That is, where an affective response towards a behaviour is positive, desirable characteristics (e.g. health benefits) are judged to be high and aversive characteristics (e.g. risks or barriers) are judged to be low, whereas the opposite applies if the affective response is negative. Affect may also lead to interrelatedness in other ways, such as through directing attention to particular information: positive feelings towards screening may increase the extent to which benefits are focused on and decrease the extent to which barriers are considered. There are various other rationales for this hypothesised interaction, some of which are more cognitive in nature, such as halo effects (in which characteristics of a behaviour are evaluated in terms of general attitudes towards it) and efforts to maintain cognitive consistency (i.e. people may attempt to avoid 'incompatible' views in which favourable aspects of a behaviour would be seen as positive while unfavourable aspects would simultaneously be seen as negative). Irrespective of the psychological underpinnings, empirical evidence provides some support for this hypothesis; cross-sectional studies have often found that perceptions of screening test benefits and barriers are negatively correlated. However, to our knowledge, no experimental studies have tested this hypothesis of interrelatedness directly, meaning that their applicability to screening policy is limited.
It is important to investigate this relationship because efforts to improve screening uptake based on addressing invitees’ stated barriers will have limited success if they are proxies for negative perceptions regarding other aspects of screening. This study used an experimental design to test whether modifying test barriers affected perceptions of conceptually unrelated benefits, and vice versa. Participants were allocated at random to receive information regarding a screening test with high or low benefits, and high or low barriers, in the context of a hypothetical disease with similarities to cancer. Perceived benefits and barriers were then compared between conditions in order to test i) whether perceptions of benefits were lower when barriers were higher, and ii) whether perceptions of barriers were higher when benefits were lower. Intention to have the hypothetical test was also compared between conditions as an exploratory analysis of how the manipulation might affect actual screening behaviour. Materials and methods 2.1 Participants Recruitment was through Survey Sampling International (SSI, London, UK), a company which curates a panel of members of the UK general population who are offered small rewards (such as air miles) to participate in online surveys. Respondents to the initial email invitation from SSI were asked their age at the start of the survey and excluded if they were younger than 25 or older than 75 years (i.e. ineligible for cancer screening in the UK). A software algorithm applied stratified sampling to ensure that the sample resembled the general adult UK population in terms of age; one third of the sample were aged 25–39 years, one third were aged 40–54 years, and one third were aged 55–75 years. 2.2 Design and measures 2.2.1 Manipulations This study consisted of a 2 × 2 between-subjects experimental design. Participants were invited to complete one of four versions of a survey, randomly determined by a software algorithm. After confirming eligibility, they were shown a vignette consisting of information on the high incidence (33%) of a hypothetical illness that was amenable to screening (‘Rogan’s disease’), the rationale for screening, and the extremely high mortality risk in the absence of a screening test (only 100 in 1000 would survive). Participants were also given a description of a set of practicalities for a hypothetical hospital-based screening test, designed to resemble Computed Tomography (a screening test based on x-rays). This test can include an intravenous dye that carries a small risk of an adverse reaction, consisting of nausea and vomiting , the severity of which was manipulated as a screening test barrier ( “severe nausea and regular vomiting for 3 days” ; “mild nausea and occasional vomiting for 5 minutes ” ). The specific types of benefits and barriers were selected with the aim of being realistic, understandable, and plausible to participants, potentially influential on their intentions to have the test, as well as being fundamentally unrelated (as opposed to e.g. false positive and false negative results). Information on the degree of benefit was provided in terms of the mortality risk after undergoing a screening test. This was manipulated to equal either a large or a comparatively small reduction in mortality risk (900 per 1000 with Rogan’s disease who underwent screening would survive; 105 per 1000 would survive). contains an example of a complete vignette. Levels of barriers and benefits were designated “high” and “low” for convenience. 
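As a rough illustration of the allocation just described, the sketch below assigns participants at random to the four vignette conditions; it is hypothetical, since the study relied on the survey platform's own algorithm rather than any code of the authors'.

import itertools
import random

# The four 2 x 2 vignette conditions (benefit level x barrier level)
conditions = list(itertools.product(["high_benefit", "low_benefit"],
                                    ["high_barrier", "low_barrier"]))

def allocate() -> tuple:
    # Simple random allocation; a real platform might instead use block
    # randomisation to balance group sizes, which is not shown here.
    return random.choice(conditions)

for participant in range(4):
    print(participant, allocate())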
2.2.2 Comprehension checks Participants were asked three multiple choice questions with four response options to assess whether they correctly recalled the relevant information on mortality risk in the presence or absence of screening (e.g. “If 1000 people with Rogan’s disease are not screened and only treated once they feel unwell, how many people will be successfully treated and survive?” : 100 people; 105 people; 500 people; 900 people), and information on the severity of the adverse reaction. Responses were coded as either correct or incorrect based on the allocated condition. 2.2.3 Perceived benefits and barriers scales Primary outcomes were assessed by seven items measuring perceived benefits (e.g. “Having the screening test would increase my chances of surviving Rogan’s disease” ) and seven items assessing perceived barriers of the screening test (e.g. “the side-effects would be too uncomfortable” ). Response options consisted of a five-point Likert scale ranging from “strongly disagree” to “strongly agree” . Items were adapted from existing measures , , and demonstrated high internal consistency (Cronbach’s α: 0.89 and 0.96, respectively). Responses were scored from one to five with higher scores representing more positive perceptions of benefits and more negative perceptions of barriers, as applicable. Scores for individual items were summed to create two overall scale scores for each participant (each out of 35). 2.2.4 Perceived risk Participants were asked about perceived risk using an adaptation of a previously designed measure with six response options ( “If I didn ’t have the screening test, I think my chances of dying from Rogan’s disease would be…”: Almost zero; very small; moderate; large; very large; almost certain ). 2.2.5 Self-efficacy A five-item assessment of self-efficacy (e.g. “How confident are you that...You could find the time to have the screening test?” ), with four response options ranging from “very confident” to “not at all confident”, was adapted from a previous measure and this also had high internal consistency (Cronbach’s α = 0.93). Responses were scored from one to four; higher scores represented greater self-efficacy and were summed to create an overall scale score for each participant (out of 20). 2.2.6 Screening intention Intention to participate in screening was assessed using an ad-hoc item: “Imagine the NHS just sent you a letter, inviting you to be screened for Rogan’s disease. Would you attend the screening test?” ). Response options consisted of “yes” , “no” , and “don’t know” . 2.2.7 Demographics The survey ended with items assessing demographic characteristics, including gender, first language, and markers of socioeconomic status. A previously used method was used to derive an overall measure of socioeconomic status, based on responses to questions on home and vehicle ownership, and education : One point was counted for living in rented accommodation, no vehicle ownership, and no formal qualifications; higher scores indicated greater deprivation. Previous participation in the three cancer screening programmes that exist in the UK were also assessed (cervical, breast and colorectal; questions were tailored by age and gender so that ineligible participants did not see irrelevant questions). At the end of the survey, participants were able to request a summary copy of the study results. An example of the full survey is included in . 
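The summated scale scores and internal-consistency values reported above (Cronbach's alpha of 0.89, 0.96 and 0.93) can be computed as in the following sketch; the response matrix is hypothetical and the function simply implements the standard Cronbach's alpha formula.

import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    # items: respondents x items matrix of Likert scores
    # alpha = k/(k-1) * (1 - sum of item variances / variance of total score)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Hypothetical responses: 10 respondents x 7 items scored 1-5
rng = np.random.default_rng(1)
responses = rng.integers(1, 6, size=(10, 7))

scale_scores = responses.sum(axis=1)   # summated scale score out of 35
print(round(cronbach_alpha(responses), 3), scale_scores)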
2.3 Piloting Prior to data collection, the manipulations for high and low benefits and barriers were tested in two waves, consisting of 32 and 26 participants, respectively. Each wave aimed to ensure that participants in the main study would discriminate between high and low levels of the two independent variables. In particular, it was assumed that participants would perceive high benefits from even very few lives saved through screening, which would have led to ceiling effects that reduced the perceived differences between high and low levels. Perceived benefits and barriers of several possible manipulations were assessed using two ad-hoc items, and the results were used to select levels that were likely to generate the largest possible differences while still being believable to participants. As an example, the first wave of piloting compared perceived benefits of 800 vs. 200 people surviving following screening (relative to 100 people surviving without screening). Notwithstanding the small sample size, scores differed in the predicted direction but only by a small amount. Consequently, the second wave of piloting amended the number of lives saved to 900 vs. 105, which was associated with a larger apparent difference in perceived benefits scores. The first wave also compared perceived barriers of an alternative to test side-effects (travel time to the hospital: 20 min vs. 2 h). Similar to perceived benefits, scores differed in the predicted direction but to a smaller degree than for the side-effect manipulation. The second wave of piloting also assessed performance of the items adapted from previously used measures of perceived benefits and barriers, in order to gauge reliability prior to administering the survey to a larger sample. 2.4 Analysis Data were analysed using SPSS version 21 for Windows (IBM, Armonk, NY, USA). Participants answering one or more comprehension questions incorrectly were assumed to be insufficiently engaged with the survey and excluded from the analysis. Descriptive statistics were used to illustrate frequencies and proportions for sample characteristics. Parametric assumptions of the data relating to perceived benefits and barriers (normally distributed residuals and homogeneity of variance) were tested and met. Hence, the primary analysis comprised two-way ANOVAs, one in which the dependent variable consisted of the overall perceived benefit score and one in which the dependent variable was the overall barrier score. In each ANOVA, independent variables consisted of benefit condition (high or low), barrier condition (high or low) and a benefit × barrier interaction term. Age band (25–39, 40–54, and 55–75 years) was included to account for any effects of stratified sampling. A sensitivity analysis was carried out in which the age band variable was omitted; results did not differ meaningfully and so are not reported here. An exploratory analysis of screening intentions (proportions intending to be screened vs. not intending vs. did not know) compared responses across the four conditions using a Pearson's χ² test. Standardised residuals (i.e. z-scores based on the difference between observed and expected frequencies) were used to test for differences in proportions between any given pair of conditions. 2.5 Required sample size and hypotheses The survey was 'soft-launched' and recruitment paused after 138 participants had completed the study in order to generate a preliminary estimate of mean square error for the dependent variables (necessary to calculate effect size).
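The primary analysis described in section 2.4 was run in SPSS; purely as an illustrative equivalent, the sketch below specifies the same kind of two-way ANOVA (benefit, barrier, their interaction, plus age band) in Python with statsmodels, using simulated data in place of the survey responses.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

# Simulated data standing in for the survey; column names are hypothetical.
rng = np.random.default_rng(2)
n = 204
df = pd.DataFrame({
    "benefit": rng.choice(["high", "low"], n),
    "barrier": rng.choice(["high", "low"], n),
    "age_band": rng.choice(["25-39", "40-54", "55-75"], n),
    "benefit_score": rng.integers(7, 36, n),   # summated 7-item scale (7-35)
})

# Two-way ANOVA: benefit and barrier main effects, their interaction,
# and age band to account for the stratified sampling.
model = smf.ols("benefit_score ~ C(benefit) * C(barrier) + C(age_band)",
                data=df).fit()
print(anova_lm(model, typ=2))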
Since there is a direct conceptual link between perceptions of benefits and the actual magnitude of benefits, but not the actual magnitude of barriers, it was assumed that there would be a larger effect of manipulating benefits on perceived benefits than that of manipulating barriers. Likewise, manipulating barriers was expected to have a larger effect on perceived barriers than manipulating benefits. Calculations were based on a five-point difference for the effects of conceptually linked manipulations and a three-point difference for conceptually unrelated manipulations. Based on the initially observed mean square errors, it was estimated that a total of 204 participants would be required (51 participants per condition; 80% power, α = 0.05).
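To make the analysis plan in Sections 2.4 and 2.5 concrete, the sketch below specifies the equivalent two-way ANOVA (benefit condition × barrier condition, with age band included) in Python with statsmodels. This is an illustration under stated assumptions rather than the authors' code: the original models were fitted in SPSS, the variable names are hypothetical, and Type II sums of squares are used here for simplicity whereas SPSS defaults to Type III.

```python
# Illustrative sketch only; the study's ANOVAs were run in SPSS.
# `df` and its column names (benefit_cond, barrier_cond, age_band, benefit_score,
# barrier_score) are hypothetical stand-ins for the study variables.
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

def two_way_anova(df: pd.DataFrame, outcome: str) -> pd.DataFrame:
    """Fit outcome ~ benefit*barrier + age band and return an ANOVA table with partial eta squared."""
    model = smf.ols(
        f"{outcome} ~ C(benefit_cond) * C(barrier_cond) + C(age_band)", data=df
    ).fit()
    table = anova_lm(model, typ=2)  # Type II sums of squares (SPSS defaults to Type III)
    resid_ss = table.loc["Residual", "sum_sq"]
    table["partial_eta_sq"] = table["sum_sq"] / (table["sum_sq"] + resid_ss)
    return table

# df = pd.read_csv("trial_data.csv")          # hypothetical data file
# print(two_way_anova(df, "benefit_score"))   # one model per dependent variable
# print(two_way_anova(df, "barrier_score"))
```

The same specification would be fitted once with the perceived benefit score and once with the perceived barrier score as the outcome, mirroring the two models described above.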
Results 3.1 Sample characteristics The flow of participants through the study is presented in . After exclusions, 218 participants were included in the main analysis. Across the whole sample, participants had a mean age of 48.6 years (standard deviation: 13.6), 52.8% were female (n = 115), 86.7% (n = 189) were white British, and 96.3% (n = 210) spoke English as a first language. The majority of screening-eligible participants reported previous experience of testing, ranging from 73.6% for CRC screening to 87.9% for breast screening. All demographic and other background characteristics are presented in . 3.2 Effects of manipulating barriers (and manipulating benefits) on perceived benefits As expected, manipulating benefits had an effect on perceived benefits, and in the predicted direction (F(1,212) = 55.25, p < 0.0005), providing an indication that the manipulation was successful (Mean: 30.0, standard deviation: 4.0 vs. M: 25.6, SD: 5.1 for high vs. low benefits, respectively).
The primary hypothesis that increasing barriers also reduced perceived benefits was also supported (F(1,212) = 6.81, p = 0.010; M: 28.5, SD: 4.8 vs. M: 27.5, SD: 5.3 for low vs. high barriers, respectively). As predicted, the effect of manipulating barriers was smaller (partial η 2 = 0.031) than that of manipulating benefits (partial η 2 = 0.207). In terms of the effects of the interaction term, there was only weak evidence against the null hypothesis (p = 0.137). 3.3 Effects of manipulating benefits (and manipulating barriers) on perceived barriers The manipulation of barriers was also successful; perceived barrier scores were higher when barriers were high (F(1,212) = 51.03, p < 0.0005; M: 19.6, SD: 7.6 vs. M: 13.1, SD: 5.7). Consistent with the main hypothesis, increasing benefits also reduced perceived barriers (F(1,212) = 5.23, p = 0.023; M: 17.1, SD: 7.6 vs. M: 15.7, SD: 7.3 for low vs. high benefits, respectively). Again, this effect was smaller (partial η 2 = 0.024) than that of manipulated barriers (partial η 2 = 0.194). There was weak evidence against the null hypothesis with respect to the effects of the interaction term (p = 0.159). reports means and standard deviations for perceived benefit and barrier scores for each of the four conditions. 3.4 Screening intentions There was strong evidence against the null hypothesis of equal proportions of intention categories across conditions (χ 2 (6) = 43.26, p < 0.0005). Follow-up analyses of standardised residuals provided evidence that a greater proportion of participants did not intend to be screened in the low benefit-high barrier condition (34.8% vs. 0.0%–7.1%; z = − 2.5; p < 0.05), and a smaller proportion did intend to be screened (37.0% vs. 73.2%–80.8%; z = 5.1; p < 0.01). In addition, fewer participants did not intend to be screened in the high benefit-low barrier condition (0.0% vs. 4.7%–34.8%; z = − 2.3; p < 0.05). Proportions of participants responding with “don’t know” were comparable between all conditions (19.2%–21.9%; ).
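The follow-up comparisons in Section 3.4 rest on Pearson's χ² test and standardised residuals, i.e. (observed − expected) / √expected for each cell. The sketch below illustrates that computation in Python; the counts are invented placeholders, not the study data.

```python
# Illustrative sketch with made-up counts (not the study data).
import numpy as np
from scipy.stats import chi2_contingency

# Rows: the four conditions; columns: intend to screen / do not intend / don't know.
observed = np.array([
    [45,  0, 11],
    [40,  3, 12],
    [41,  4, 11],
    [17, 16, 13],
])

chi2, p, dof, expected = chi2_contingency(observed)
std_resid = (observed - expected) / np.sqrt(expected)  # z-score per cell

print(f"chi2({dof}) = {chi2:.2f}, p = {p:.4f}")
print(np.round(std_resid, 2))  # |z| greater than about 2 flags cells driving the overall effect
```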
Discussion and conclusion 4.1 Discussion These findings provide evidence that screening attributes are not appraised independently but jointly, and manipulating one affects evaluations of the other. Our results build on cross-sectional studies that have demonstrated a negative correlation between benefits and barriers of cancer screening tests , , , by showing that to some extent these correlations are likely to be due to a degree of interrelatedness between the two characteristics. This study also found that the large majority of participants stated that they would have the test in three of the four conditions. However, there was a marked difference in the worst condition (low benefits, high barriers), with a greater proportion stating that they would not have the test. This exploratory analysis offers some indication that barriers and benefits might interact in a way that influences screening uptake. Further research would also be necessary to understand how intentions (and ultimately actual uptake) relate to the observed interaction. Although there is evidence that barriers and benefits are good predictors of behaviour when assessed individually , , there has been extensive criticism of the assumption made by the Health Belief Model that they have simple additive effects , . In this respect, the present findings support researchers’ recommendations for alternative approaches that examine moderation among variables . Our results further suggest that a degree of caution is warranted regarding research that aims to identify specific barriers to cancer screening without simultaneously addressing perceptions of benefits : The issues that participants raise as important barriers to screening may be proxies for being unconvinced about the benefits , . One further implication of these results is that screening tests with greater barriers might also elicit less positive perceptions of benefits.
For example, flexible sigmoidoscopy screening for colorectal cancer (CRC) involves an invasive, internal examination and an inconvenient bowel preparation, which might diminish the effectiveness of interventions to improve uptake that aim solely to communicate its efficacy in terms of reducing CRC incidence and mortality. Conversely, these findings suggest that there may be potential to improve perceptions of screening test benefits by reducing barriers (and vice versa). As a practical example, as the Bowel Cancer Screening Programme in England replaces one method of stool testing with a less inconvenient alternative , this reduced inconvenience may lead to more favourable appraisals of the test’s capability to reduce mortality. This study has limitations. The context of screening for a hypothetical illness allowed benefits and barriers to be manipulated freely, to the point that participants could discriminate between the two levels of each independent variable. However, the implications for practice with respect to real screening contexts are undetermined. It is notable that pilot work found similar benefit scores for even large differences in mortality reduction. The small observed effects may not apply to real screening contexts in which differences between tests are subtler. In addition, participants were excluded if they answered one of the three ‘comprehension check’ questions incorrectly, despite assistance offered to help them respond correctly. This approach aimed to exclude participants from the analysis if they had not read the relevant information and so were not sufficiently engaged with the study. However, it might have also resulted in a sample that was more numerate or literate than the general population. The proportion of exclusions was also greater in the low benefit-high barrier condition. This study tested whether barriers affected perceptions of benefits and vice versa. However, it did not aim to test whether any particular psychological mechanism underpinned this relationship. The findings are consistent with the presence of an affect heuristic , which has been used to explain similar effects in appraisals of other technologies , but it is also consistent with various alternative explanations such as directing attention towards particular kinds of information , a halo effect, and attempts to avoid cognitive dissonance . Further studies would be necessary to explore these possibilities. For example, subsequent studies could use a similar design but include measures of emotion in order to test for affective explanations. In the first instance, it would be important to test whether the effects of absolute barriers and benefits on unrelated outcomes were mediated via perceived barriers and benefits, respectively. Further research that uses these approaches would make a greater contribution to psychological theory. Other areas for further research relate to the specific manipulations used: The set of benefits and barriers manipulated in the present study were selected following pilot work that aimed to maximise the chances of observing the hypothesised effect while still being believable to participants. This effect may not necessarily have been apparent with other benefits or barriers (e.g. one of our original tested barriers of travel distance to the hospital, which appeared to elicit smaller differences in perceived barriers between longer and shorter journey times than in the case of the side-effects attribute). 
However, characteristics of real screening tests are complex and multifactorial. Benefits can be medical and psychological; barriers can also be psychological as well as practical . It may be particularly valuable to policy makers to determine the effects of manipulating specific characteristics of screening tests. For example, the risk of overdiagnosis in the case of breast cancer screening is the subject of intense debate since it results in unnecessary treatment and the psychological harms of a cancer diagnosis. Overdiagnosis may be perceived more negatively by screening invitees than the practical barriers described in this study. Furthermore, it is often unfeasible to change real characteristics of screening tests but it is much easier to alter information in screening invitations. For example, different degrees of emphasis can be placed on information about barriers or benefits (e.g. by giving them greater prominence within an invitation leaflet, or by reiterating them in a leaflet summary). Manipulating these characteristics may increase or decrease some of the effects observed here. Moreover, the results of this study suggest that manipulating both attributes would have more than just an additive effect. 4.2 Conclusion We found evidence that manipulating barriers of a screening test influenced perceived benefits and that manipulating benefits influenced perceived barriers. Future research should test the possible underlying psychological mechanisms and investigate the extent to which these findings generalise to real screening contexts. This would inform policy makers in their efforts to improve the balance of screening barriers and benefits in order to increase uptake.
None declared. Ethical approval was granted by the UCL Research Ethics Committee (5791/001). The current study was supported by a programme grant from Cancer Research UK awarded to Prof Jane Wardle (C1418/A14134). Cancer Research UK was not involved in the design of this study; the collection, analysis, or interpretation of the results; in the writing of the manuscript; or in the decision to submit for publication. I confirm all patient/personal identifiers have been removed or disguised so the patient/person(s) described are not identifiable and cannot be identified through the details of the story. AG, CVW, and JW conceived the study. AG, EN, and CVW participated in the design. AG and EN participated in the acquisition of the data. AG, EN, and CVW participated in analysis of the data. All authors participated in interpretation of data, drafting and critical revision of the manuscript, and approved the final version.
Cordycepin affects
b79877df-968f-4690-85c1-026d69e72529
11700466
Biochemistry[mh]
Dental caries is a common disease of the oral cavity. It is caused by bacterial infection that can lead to gradual deterioration of the hard tissue of the teeth and thus requires prompt treatment . Common complications of caries include pulpitis, tooth loss, jaw inflammation, and even septicemia caused by unrestricted bacterial growth . According to the 2017 Global Burden of Disease study, 2.3 billion individuals worldwide have caries in their permanent dentition and over 530 million children have caries in deciduous teeth . The prevention and control of caries is therefore an important and urgent public health issue throughout the world. Caries is a multifactorial disease characterised by demineralisation of tooth structure caused by organic acids from biofilms . Biofilms are architectural communities of diverse microorganisms that are firmly attached to surfaces and embedded in a three-dimensional extracellular matrix composed of polymeric substances such as extracellular polysaccharides, proteins and nucleic acids . The extracellular matrix produced by specific microorganisms promotes microbial adhesion and cohesion and is essential for the expression of virulence of bacterial pathogens living in biofilms . S. mutans is a major matrix producer and can rapidly modulate cariogenic biofilm formation when sucrose is present in the diet . It can also metabolize carbon sources to produce acids . When the pH of the local micro-ecological environment of the tooth surface drops to 5.5, demineralization of the enamel can occur. However, S. mutans can tolerate acidic environments and still survive at pHs below 5.5, contributing to dentin demineralization and the formation of dental caries . Bacteria within biofilms are more resistant to drugs and host immune defense than planktonic bacteria, and biofilms can also supply bacteria with abundant nutrients that promote their growth and reproduction, thus aggravating caries . Therefore, regulation of the cariogenic effects of biofilm may be a method to prevent caries. The isolation and screening of active ingredients from natural substances and studying their therapeutic mechanisms are current research hotspots in pharmacology. Cordycepin (formula: C10H13N5O3), also known as 3’-deoxyadenosine , is a traditional monomeric Chinese medicine that is isolated from cordyceps fungi and has anti-tumor , anti-viral , anti-inflammatory , and immunomodulatory properties. Cordycepin has been demonstrated to exhibit broad-spectrum antibacterial activity against 11 species of intestinal bacteria, including Escherichia coli , and eight species of lactobacilli . Other studies have indicated that cordycepin can also competitively inhibit the catalytic activity of bacterial adenosine kinase, which in turn inhibits adenosine phosphorylation, leading to the death of Mycobacterium tuberculosis . Cordycepin also affects the structure and function of nucleic acids by disrupting bacterial cell membranes and binding to bacterial genomic DNA, ultimately leading to the death of Escherichia coli and Bacillus subtilis . Cordycepin has been shown to play a vital role in inhibiting and eradicating Candida albicans biofilms . However, there is limited information on whether cordycepin inhibits S. mutans biofilms. In addition, the development of antimicrobial treatments suitable for clinical application is also an important part of research. Toothpaste and mouthwash containing natural/herbal agents are widely used for treating oral diseases.
Mouthwash is especially popular as it is convenient to use. Mouthwash has been shown to control biofilm formation to some extent by interfering with the metabolism of oral microorganisms . Metabolomics is a technology that mainly investigates spatiotemporal variations in landscapes of low molecular-weight metabolites resulting from different stimuli and treatments, thus providing insight into the metabolic regulation of biological systems . In our previous study, it was found that the treatment of biofilm samples with 1.5% arginine twice daily (at the 8 h and 20 h time points, 10 min per treatment) was better able to simulate the environment in which someone might use mouthwash twice per day. In addition, 1.5% arginine in this treatment mode can significantly reduce the biomass of S. mutans biofilm . To date, there have been no investigations into the effect of cordycepin on S. mutans biofilm. Therefore, this study referred to the treatment mode of 1.5% arginine to evaluate the anti-biofilm effects of cordycepin and determine the effects of cordycepin from the perspective of metabolomics. Bacterial species and growth conditions S. mutans UA159 (ATCC 700610) was cultured overnight in brain heart infusion (BHI) broth (HuanKai Microbial, Guangdong, China) in an incubator at 37 °C with 5% CO 2 . Determination of the minimum inhibitory concentration (MIC) of cordycepin The minimum inhibitory concentration (MIC) of cordycepin against S. mutans was determined using a microdilution method, as described by Cai et al. . Cordycepin (BENCAOYIKANG, YK220110, China) was dissolved in sterile water and serially two-fold diluted with sterile water to varying concentrations (8,192, 4,096, 2,048, 1,024, 512, 256, 128, 64, 32, 16, 8, 4, and 2 µg/mL) before use. 100 µL of bacterial suspension prepared in double-concentration BHI and 100 µL of cordycepin solution were inoculated into each well of a 96-well round-bottomed plate (Corning, 3799, USA), resulting in a final bacterial count of 10 6 colony forming units (CFU) mL − 1 . The final concentrations of cordycepin were 1, 2, 4, 8, 16, 32, 64, 128, 256, 512, 1,024, 2,048, and 4,096 µg/mL. The equivalent volumes of sterile water were used as control. The bacterial samples were incubated at 37 °C for 24 h and absorbances at 600 nm were then measured using a spectrophotometer (Thermo Fisher Scientific 1530, Waltham, MA, USA). Each experiment was performed with triplicate samples at each time point. The results correspond to three experiments independently. Assessment of biofilm biomass Crystal violet assays were used to evaluate the biomass of the biofilm . The bacteria were adjusted to 10 7 CFU mL − 1 in BHI with 1% sucrose (1% BHIS), and these bacterial suspensions were inoculated in 96-well flat-bottomed plates (Corning, 3599) at 200 µL per well and cultured at 37 °C under 5% CO 2 to allow biofilm formation. At both the 8- and 20-h time points, the biofilm was treated with 200 µL of different concentrations of cordycepin (1, 2, 4, 8, 16, 32, 64, 128, 256, 512, 1,024, 2,048 and 4,096 µg/mL) (10 min per treatment). The medium was replaced with fresh 1% BHIS after each treatment. After the second treatment, the biofilm was allowed to grow until the 24-h time point. The biofilms were stained for 15 min with 0.1% crystal violet (100 µL/well) and washed twice with sterile water. The excess crystal violet solution was removed, the wells were washed twice with sterile water, and the bound crystal violet was dissolved by adding 100 µL of 100% ethanol.
After transferring a 100 µL volume of each sample into a new 96-well plate, the absorbances of the samples were read at 570 nm with a spectrophotometer (Thermo Fisher Scientific 1530). Sterile water and 1.5% arginine were used as the negative and positive control, respectively. Each experiment was performed with triplicate samples at each time point. The results correspond to three experiments independently. Cytotoxicity assays Human oral keratinocytes (HOK) (GuangZhou Jennio Biotech company, China) were used to evaluate the cytotoxicity of cordycepin in vitro using CCK-8 assays . HOK cells were cultured at 37 °C and 5% CO 2 in Minimum Essential Medium (Gibco, 1095500BT, USA) containing 10% fetal bovine serum (Lonsera, S711-001 S, Uruguay). The HOK cells (5000 cell/100 µL/well) were seeded in 96-well plates and grown for 24 h, after which they were treated with medium with or without cordycepin (128 µg/mL) for 24 h. After incubation, 10 µL of CCK-8 solution (Beyotime, C0038, China) was added to each well and the plate was incubated in a CO 2 incubator for 3 h. The absorbances of the samples were measured at 450 nm against a blank which contained medium only. Cell viability = (OD of sample - OD of blank)/(OD of control -OD of blank) × 100%. Each experiment was performed with triplicate samples at each time point. The results correspond to three experiments independently. MTT metabolic assay The metabolic activity of the bacteria was determined by 3-(4,5-Dimethyl-2-thiazolyl)-2,5-diphenyl-2 H-tetrazolium bromide (MTT) (Sigma-Aldrich, M2128, USA) assay, as previously described . As described in 2.3 , the 24-h biofilm was treated with sterile water or 128 µg/mL of cordycepin for 10 min at the 8- or 20-h time points. 50 µL of 0.5 mg/mL MTT solution in PBS were added to the biofilms in 96-well plates. After 1-h incubation at 37 °C, the wells were washed twice with 150 µL PBS, after which 150 µL of DMSO (Beyotime, ST038, China) was added to the wells. After 10 min on an orbital shaker, the absorbances of the samples were measured at 570 nm with a spectrophotometer. Metabolic activity of biofilms = (OD of sample - OD of blank)/(OD of control -OD of blank) × 100%. Each experiment was performed with triplicate samples at each time point. The results correspond to three experiments independently. Analysis of the live bacterial rate and the volume of biofilm by confocal laser-scanning microscopy Cell samples (2,000 µL, 10 7 CFU mL − 1 in fresh 1% BHIS) were added to confocal culture dishes (NEST, 801002, USA). As described in 2.3, the 24-h biofilm was treated with sterile water or 128 µg/mL of cordycepin for 10 min at the 8- and 20-h time points. After three washes with 0.9% saline, bacterial cells were stained using a LIVE/DEADTM BacLight™ Bacterial Viability Kit (Thermo Fisher, L13152). After three washes with 0.9% saline to remove dye, the bacteria were examined by confocal laser scanning microscopy (Olympus, FV-1200MPE SHARE, Japan), with live bacteria staining green (excitation channel was set to 488 nm), and dead bacteria staining red (excitation channel was set to 543 nm). Triplicate samples were used, and three randomly selected fields in each sample were examined. This experiment was repeated three times independently. The rate of live bacteria and the biofilm volume were calculated using IMARIS software. Metabolomics analysis Metabolite extraction Each well of a flat-bottomed 24-well plate was inoculated with 1,000 µL of cell sample (10 7 CFU mL − 1 in fresh 1% BHIS). 
As described in 2.3, the biofilm was treated with sterile water (Control-24 h) or 128 µg/mL of cordycepin (Exp-24 h) for 10 min at the 8- and 20-h time points. After the second treatment, the biofilm culture was continued for another 4 h before collection. Biofilm samples were also collected at the 20-h point following the second cordycepin treatment (Exp-20 h). Metabolite extraction, detection, and analysis were performed at Wuhan Metware Biotechnology (Wuhan, China), where the samples had been sent on dry ice. One hundred microliter samples of biofilm were mixed with 300 µL of methanol for 1 min. The samples were then homogenized using a TissueLyser (Jingxin Company, Shanghai) for 5 min at 50 Hz. After being frozen for 2 h at − 20 °C, the samples were centrifuged at 4 °C for 15 min at 25,000 g. This was followed by Liquid chromatograph mass spectrometer (LC-MS) analysis. Sextuplicate samples were used. Ten-microliter aliquots of supernatant were removed from each sample and incorporated into quality control (QC) samples to monitor the repeatability of the analysis process. Metabolomics detection A 1290 Infinity ultraperformance liquid chromatography (UPLC) system (Agilent, USA) with a Waters Acquity UPLC HSS T3 C18 column (100 × 2.1 mm, 1.8 μm) was used for chromatographic separation. The column was maintained at 40 °C with a 2-µL injection volume for each sample and 0.4 mL/min flow rate. The mobile phases were (A) water and (B) acetonitrile both containing 0.1% formic acid (Thermo Fisher). The column was eluted with 5% mobile phase B and 95% mobile phase A, which was then followed by a linear gradient to 90% mobile phase B and 10% mobile phase A over 11 min, which was then held for 1 min before being brought back down to 5% mobile phase B and 95% mobile phase A within 0.1 min and then held for 1.9 min. A 6550 quadrupole time-of-flight mass spectrometer (Agilent) was operated in positive (ESI+) and negative (ESI-) electrospray ion modes. The ion source voltages for ESI + and ESI- modes were 2.5 kV and 1.5 kV, respectively. In ESI + and ESI- mode, both the gas and sheath temperature were set at 325 ℃ and the gas and sheath flows were set at 8 L/min and 11 L/min, respectively. The nebulizer and fragmentation voltages were 40 V and 135 V, respectively. Data processing and analysis methods ProteoWizard was used to transform the original LC-MS data file into the mzML format. Peak extraction, peak alignment, and retention time correction were performed using XCMS software. The “SVR” technique was applied for adjustment of the peak area. Peaks with detection rates below 50% were eliminated from each set of samples. Principal component analysis (PCA) was performed by using the R “prcomp” function. PCA was used for the initial analysis of the separation between the samples of each group and determining whether there were differences within the sample groups, while the QC samples were monitored to determine whether the instrument was stable. Orthogonal partial least squares discriminant analysis (OPLS-DA) was performed using the “MetaboAnalyst” package in R. One of the most common applications of OPLS-DA, a form of supervised multivariate statistical analysis, is the identification of metabolites that are significantly different across sample groups. Moreover, 200-permutation tests were conducted to assess OPLS-DA model overfitting. The prediction parameters R 2 X, R 2 Y, and Q 2 were calculated to evaluate the predictive ability and goodness-of-fit of the OPLS-DA model. 
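As a hedged illustration of the unsupervised step described above (the authors used R's prcomp and the MetaboAnalyst package), the sketch below runs a comparable PCA in Python on a simulated intensity matrix. The matrix dimensions, transformation, and scaling choices are assumptions for demonstration, not the study's actual processing pipeline.

```python
# Illustrative sketch only; the study used R (prcomp, MetaboAnalyst) on XCMS-processed peaks.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.lognormal(mean=8, sigma=1, size=(18, 500))  # simulated: 18 samples x 500 metabolite features

X_scaled = StandardScaler().fit_transform(np.log2(X + 1))  # log-transform, then unit-variance scaling

pca = PCA(n_components=2)
scores = pca.fit_transform(X_scaled)

print("explained variance ratio:", np.round(pca.explained_variance_ratio_, 3))
# In practice, scores[:, 0] and scores[:, 1] would be plotted and coloured by group
# (Control-24 h, Exp-20 h, Exp-24 h, plus QC samples) to inspect separation and instrument stability.
```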
Selected differential metabolites The variable importance in projection (VIP) based on the OPLS-DA model is a parameter that shows the importance of a variable in a model. Usually, a metabolite with VIP > 1 is considered to play an important role in distinguishing groups. The screening requirements for differential metabolites between the two groups included a fold-change (FC) value of at least 2.0 or ≤ 0.5, VIP of more than 1, and corrected P- values less than 0.05. Differential metabolites were selected for multi-group analysis if they matched the criteria of VIP > 1 and corrected P -value < 0.05 (one-way ANOVA). Enrichment analysis and KEGG annotation The identified differential metabolites were annotated using the KEGG compound database ( www.kegg.jp ), after which they were mapped using the KEGG Pathway database. Quantitative real-time PCR Nine genes ( guaA , guaB , hprT , purA , purB , apt , deoD , punA , and add ) found to be associated with the differential metabolites were selected for quantitative real-time PCR verification. The grouping and biofilm preparation time points used for this experiment were consistent with those used for the metabolomics analysis. Total RNA was extracted from three groups of biofilm samples using an miRNeasy Mini Kit (Qiagen, Germany). cDNA synthesis and quantitative real-time PCR analysis were performed using the PrimeScript™ RT reagent Kit (Takara Cat RR037A) and TB Green ® Premix Ex Taq ™ II kit (Takara Cat RR820A), respectively. The amplification procedures were as follows: 95 °C for 30 s; 45 cycles of 95 °C for 5 s, and 60 °C for 30 s. The sequences of the primers designed by Primer 3.0 software are shown in Table , and 16 S rRNA was used as the endogenous control. The mRNA expression levels of related genes in the three groups were analyzed by the threshold cycle method (2 −ΔΔCT ). Each experiment was performed with triplicate samples at each time point. The results correspond to three experiments independently. Statistical analysis Statistical analysis was performed using IBM SPSS 25.0 (IBM, Armonk, NY, USA). The data were assessed to determine whether they were normally distributed. Normally distributed data were analyzed using t-tests, while Mann-Whitney U tests were used to compare non-normally distributed data. Differences in the expression of nine genes between the Control-24 h, Exp-20 h, and Exp-24 h groups were analyzed using one-way ANOVA. If the variance was homogeneous, an LSD multiple comparison test was performed, while for non-homogenous variances, Tamhane’s T2 multiple comparison test was used. For metabolome analysis, see 2.7.3 and 2.7.4 above. The level of statistical significance was set at P < 0.05.
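The two-group screening criteria above (VIP > 1, fold-change ≥ 2.0 or ≤ 0.5, corrected P < 0.05) translate directly into a filtering step. The sketch below shows one way this could be expressed in Python; the table of per-metabolite statistics and its column names are hypothetical, with VIP assumed to come from the fitted OPLS-DA model and P-values already corrected for multiple testing.

```python
# Illustrative sketch; the per-metabolite statistics table and column names are hypothetical.
import pandas as pd

def select_differential(stats: pd.DataFrame,
                        vip_col: str = "VIP",
                        fc_col: str = "fold_change",
                        p_col: str = "p_adj") -> pd.DataFrame:
    """Apply the two-group criteria: VIP > 1, FC >= 2.0 or <= 0.5, corrected P < 0.05."""
    mask = (
        (stats[vip_col] > 1)
        & ((stats[fc_col] >= 2.0) | (stats[fc_col] <= 0.5))
        & (stats[p_col] < 0.05)
    )
    return stats.loc[mask]

# stats = pd.read_csv("metabolite_stats.csv")            # hypothetical input table
# differential = select_differential(stats)
# differential.to_csv("differential_metabolites.csv")    # candidates for KEGG annotation and mapping
```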
Metabolic activity of biofilms = (OD of sample - OD of blank)/(OD of control -OD of blank) × 100%. Each experiment was performed with triplicate samples at each time point. The results correspond to three experiments independently. Cell samples (2,000 µL, 10 7 CFU mL − 1 in fresh 1% BHIS) were added to confocal culture dishes (NEST, 801002, USA). As described in 2.3, the 24-h biofilm was treated with sterile water or 128 µg/mL of cordycepin for 10 min at the 8- and 20-h time points. After three washes with 0.9% saline, bacterial cells were stained using a LIVE/DEADTM BacLight™ Bacterial Viability Kit (Thermo Fisher, L13152). After three washes with 0.9% saline to remove dye, the bacteria were examined by confocal laser scanning microscopy (Olympus, FV-1200MPE SHARE, Japan), with live bacteria staining green (excitation channel was set to 488 nm), and dead bacteria staining red (excitation channel was set to 543 nm). Triplicate samples were used, and three randomly selected fields in each sample were examined. This experiment was repeated three times independently. The rate of live bacteria and the biofilm volume were calculated using IMARIS software. Metabolite extraction Each well of a flat-bottomed 24-well plate was inoculated with 1,000 µL of cell sample (10 7 CFU mL − 1 in fresh 1% BHIS). As described in 2.3, the biofilm was treated with sterile water (Control-24 h) or 128 µg/mL of cordycepin (Exp-24 h) for 10 min at the 8- and 20-h time points. After the second treatment, the biofilm culture was continued for another 4 h before collection. Biofilm samples were also collected at the 20-h point following the second cordycepin treatment (Exp-20 h). Metabolite extraction, detection, and analysis were performed at Wuhan Metware Biotechnology (Wuhan, China), where the samples had been sent on dry ice. One hundred microliter samples of biofilm were mixed with 300 µL of methanol for 1 min. The samples were then homogenized using a TissueLyser (Jingxin Company, Shanghai) for 5 min at 50 Hz. After being frozen for 2 h at − 20 °C, the samples were centrifuged at 4 °C for 15 min at 25,000 g. This was followed by Liquid chromatograph mass spectrometer (LC-MS) analysis. Sextuplicate samples were used. Ten-microliter aliquots of supernatant were removed from each sample and incorporated into quality control (QC) samples to monitor the repeatability of the analysis process. Metabolomics detection A 1290 Infinity ultraperformance liquid chromatography (UPLC) system (Agilent, USA) with a Waters Acquity UPLC HSS T3 C18 column (100 × 2.1 mm, 1.8 μm) was used for chromatographic separation. The column was maintained at 40 °C with a 2-µL injection volume for each sample and 0.4 mL/min flow rate. The mobile phases were (A) water and (B) acetonitrile both containing 0.1% formic acid (Thermo Fisher). The column was eluted with 5% mobile phase B and 95% mobile phase A, which was then followed by a linear gradient to 90% mobile phase B and 10% mobile phase A over 11 min, which was then held for 1 min before being brought back down to 5% mobile phase B and 95% mobile phase A within 0.1 min and then held for 1.9 min. A 6550 quadrupole time-of-flight mass spectrometer (Agilent) was operated in positive (ESI+) and negative (ESI-) electrospray ion modes. The ion source voltages for ESI + and ESI- modes were 2.5 kV and 1.5 kV, respectively. In ESI + and ESI- mode, both the gas and sheath temperature were set at 325 ℃ and the gas and sheath flows were set at 8 L/min and 11 L/min, respectively. 
The nebulizer and fragmentation voltages were 40 V and 135 V, respectively. Data processing and analysis methods ProteoWizard was used to transform the original LC-MS data files into the mzML format. Peak extraction, peak alignment, and retention time correction were performed using XCMS software. The "SVR" technique was applied for adjustment of the peak area. Peaks with detection rates below 50% were eliminated from each set of samples. Principal component analysis (PCA) was performed using the R "prcomp" function. PCA was used for the initial analysis of the separation between the samples of each group and to determine whether there were differences within the sample groups, while the QC samples were monitored to determine whether the instrument was stable. Orthogonal partial least squares discriminant analysis (OPLS-DA) was performed using the "MetaboAnalyst" package in R. One of the most common applications of OPLS-DA, a form of supervised multivariate statistical analysis, is the identification of metabolites that differ significantly across sample groups. Moreover, 200-permutation tests were conducted to assess OPLS-DA model overfitting. The prediction parameters R2X, R2Y, and Q2 were calculated to evaluate the predictive ability and goodness-of-fit of the OPLS-DA model.
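For illustration only, the two data-reduction steps described above (removal of peaks detected in fewer than 50% of samples and an initial PCA) can be sketched as follows. The actual pipeline used XCMS and the R "prcomp" function; the file name, column layout, and scikit-learn calls below are assumptions made purely for the example.

```python
# Illustrative sketch (not the authors' pipeline): peaks.csv is a hypothetical
# peak table with rows = samples and columns = metabolite features (NaN = not detected).
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

peaks = pd.read_csv("peaks.csv", index_col=0)  # hypothetical file name

# Remove features detected in fewer than 50% of the samples.
detection_rate = peaks.notna().mean(axis=0)
filtered = peaks.loc[:, detection_rate >= 0.5].fillna(0)

# PCA on autoscaled intensities for a first look at group separation and QC clustering.
scores = PCA(n_components=2).fit_transform(StandardScaler().fit_transform(filtered))
print(scores[:5])
```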
Effects of cordycepin on bacterial growth and biofilm and its cytotoxicity No effective MIC was found within the set range (0, 1, 2, 4, 8, 16, 32, 64, 128, 256, 512, 1,024, 2,048, 4,096 µg/mL) (Fig. A). However, it was found that twice-daily treatment with a certain concentration of cordycepin effectively reduced the biofilm biomass. Although 32 and 64 µg/mL of cordycepin reduced biofilm biomass compared to sterile water (negative control), there was no significant difference compared to 1.5% arginine (positive control), so we chose the concentration of 128 µg/mL, which was significantly different from both the negative and positive controls, for the subsequent experiments (Fig. B). The MTT assay was used to evaluate the metabolic activity of the S. mutans biofilm. As shown in Fig. C, the metabolic activity of the biofilm decreased to 75.452 ± 3.862% after 128 µg/mL cordycepin treatment, and the difference was statistically significant ( P < 0.001). This result indicates that cordycepin reduced the biomass of the biofilm by inhibiting the metabolic activity of S. mutans. The cytotoxicity of cordycepin was evaluated by testing the viability of HOK cells. As shown in Fig. D, treatment with 128 µg/mL cordycepin for 24 h had no significant effect on HOK cells ( P = 0.916). Therefore, 128 µg/mL cordycepin was used for subsequent experiments. Effect of cordycepin on the live bacterial rate and volume of S. mutans biofilm Planar and 3D images of the biofilm were obtained by confocal laser scanning microscopy, and the live bacterial rate and the volume of the S. mutans biofilm were analyzed. In the images, green indicates live bacteria and red indicates dead bacteria. In the absence of cordycepin, the biofilm structure was relatively dense and substantial, while after cordycepin treatment, the structure appeared looser and more dispersed (Fig. A). Meanwhile, the biofilm volume decreased from (1.447 ± 0.117) × 10^8 µm³ to (1.233 ± 0.084) × 10^8 µm³, a statistically significant difference ( P < 0.001) (Fig. B). However, the live bacterial rate did not change significantly after cordycepin treatment ( P = 0.551) (Fig. C), indicating that cordycepin did not reduce the biomass of the biofilm by killing bacteria.
Metabolic changes Metabolic profile summaries To explore the effects of cordycepin on S. mutans biofilm, metabolomic analysis of biofilm samples from different groups was performed. To further observe the immediate metabolic changes in S. mutans after cordycepin treatment, an immediate treatment group (Exp-20 h) was included in the metabolomics analysis. The three groups were Control-24 h, Exp-24 h, and Exp-20 h. The Control-24 h and Exp-24 h groups were treated with sterile water and cordycepin for 10 min, respectively, at the 8-h and 20-h time points, and culture was continued for 4 h. The Exp-20 h samples were collected immediately at the 20-h time point after the second cordycepin treatment. Differences in metabolism were analyzed between the three groups. The total ion chromatograms for the QC samples obtained from the Control-24 h ( n = 6), Exp-20 h ( n = 6), and Exp-24 h ( n = 6) group samples in the ESI+ and ESI- modes are presented in Appendix Figure (A, B). The figures show a high degree of overlap between the total ion chromatograms, indicating good signal stability of the instrument. Appendix Figure (C, D) displays the PCA plots of the Control-24 h, Exp-20 h, Exp-24 h, and QC samples, showing that the QC samples were grouped tightly together and were easily distinguishable from the treatment samples. These results demonstrated the consistency and reproducibility of the LC-MS analyses. The PCA plots showed a clear separation of the Exp-20 h group from the other two groups, while the Control-24 h and Exp-24 h groups overlapped to some extent (Fig. A and B). The OPLS-DA plots clearly distinguished between the groups (Fig. C and D), and the 200-permutation test assessment of the model quality showed accurate prediction and modeling capacity, as shown in Fig. E (ESI+ model, R2X = 0.739, R2Y = 0.976 and Q2 = 0.906) and Fig. F (ESI- model, R2X = 0.789, R2Y = 0.987 and Q2 = 0.893). These findings point to modifications in bacterial metabolites in response to cordycepin treatment. Identification of differential metabolites The screening criteria used for the identification of differential metabolites were FC ≥ 2.0 or ≤ 0.5, VIP of more than 1, and corrected P-values of less than 0.05. The presence of differential metabolites in the Control-24 h vs. Exp-24 h (Cluster A), Control-24 h vs. Exp-20 h (Cluster B), and Exp-24 h vs. Exp-20 h (Cluster C) comparisons was evaluated in the ESI+ mode based on the predetermined criteria. Cluster A was found to have 9 differential metabolites, of which 3 were upregulated and 6 were downregulated (Fig. A), while Cluster B showed 994 differential metabolites, with significant upregulation of 105 and downregulation of 889 metabolites (Fig. C). Cluster C included 978 differential metabolites, with 96 upregulated and 882 downregulated (Fig. E). In the ESI- mode, 0, 850, and 832 differential metabolites were identified in Clusters A, B, and C, respectively. Figure D shows that 47 differential metabolites were significantly upregulated while 803 were downregulated in Cluster B. Figure F shows that 792 differential metabolites were significantly downregulated and 40 were upregulated in Cluster C.
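As a point of reference, the screening rule applied above (FC ≥ 2.0 or ≤ 0.5, VIP > 1, corrected P < 0.05) amounts to the simple filter sketched below. The input file and column names are hypothetical; the actual screening was performed within the metabolomics analysis workflow.

```python
# Illustrative sketch of the differential-metabolite screening rule described above.
import pandas as pd

stats = pd.read_csv("metabolite_stats.csv")  # hypothetical columns: metabolite, fold_change, vip, p_adj

hits = stats[
    ((stats["fold_change"] >= 2.0) | (stats["fold_change"] <= 0.5))
    & (stats["vip"] > 1.0)
    & (stats["p_adj"] < 0.05)
]
up = hits[hits["fold_change"] >= 2.0]
down = hits[hits["fold_change"] <= 0.5]
print(len(hits), "differential metabolites:", len(up), "upregulated,", len(down), "downregulated")
```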
KEGG annotation and enrichment analysis Assessment of the shared differential metabolites between the three comparisons, i.e., Clusters A, B, and C, identified four common differential metabolites, namely, 2’-deoxyadenosine, 2’-deoxyinosine, adenine, and allopurinol, in the ESI+ mode. No differential metabolites shared between the three clusters were identified in the ESI- mode (Fig. ). The pathways with which these four differential metabolites were associated were then investigated using the S. mutans KEGG database. Allopurinol was not matched with any metabolic pathway, while adenine, 2’-deoxyadenosine, and 2’-deoxyinosine were found to be associated with five metabolic pathways, namely, purine metabolism (SMU00230), nucleotide metabolism (SMU01232), ABC transporters (SMU02010), metabolic pathways (SMU01100), and biosynthesis of secondary metabolites (SMU01110). Of these, purine metabolism (SMU00230) and nucleotide metabolism (SMU01232), both of which showed low P-values and high scores, were considered worthy of further attention (Fig. ), and the levels of metabolites associated with purine metabolism and nucleotide metabolism were compared between the three groups. Immediately after treatment with cordycepin (Exp-20 h), the contents of 2’-deoxyadenosine, 2’-deoxyinosine, and adenine were found to be significantly upregulated, while after continued culture (Exp-24 h), the levels of the three metabolites decreased markedly. Moreover, the levels of both 2’-deoxyadenosine and 2’-deoxyinosine remained higher than their original levels, while that of adenine was lower than the original level (Fig. A-C). In addition, metabolites related to purine and nucleotide metabolism pathways, including adenosine, inosine, xanthosine, guanosine, adenosine monophosphate (AMP), inosine monophosphate (IMP), xanthosine monophosphate (XMP), guanosine monophosphate (GMP), hypoxanthine, xanthine, cyclic AMP, and cyclic GMP, were also investigated (Fig. D-O). Although these metabolites were not among the shared differential metabolites between Clusters A, B, and C, they were found to be significantly different in Cluster B and Cluster C. After immediate treatment with cordycepin (Exp-20 h), the contents of these metabolites decreased significantly but were markedly raised after continued culture (Exp-24 h). Figure provides a summary of these results. Assessment of gene expression levels The relative expression levels of guaA, guaB, hprT, purA, purB, apt, deoD, punA, and add in the metabolic pathways associated with the differential metabolites listed in Fig. A were analyzed. To explore potential changes in the expression of these genes in biofilms treated with cordycepin, three groups were evaluated, namely, Control-24 h and Exp-24 h in the 24-h biofilm model and Exp-20 h in the 20-h biofilm model. Figure B shows that, compared with the Control-24 h group, the expression levels of these genes associated with purine metabolism and nucleotide metabolism were significantly downregulated after cordycepin treatment (Exp-20 h) but were upregulated after continued culture (Exp-24 h).
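The relative expression values reported here were obtained with the 2^-ΔΔCt method described in the quantitative real-time PCR methods, with 16S rRNA as the endogenous control. A minimal sketch of that calculation is given below; the Ct values are hypothetical and serve only to show the arithmetic.

```python
# Minimal sketch of the 2^(-ΔΔCt) calculation used for the qPCR data
# (16S rRNA as endogenous control). Ct values below are hypothetical.
def relative_expression(ct_gene, ct_ref, ct_gene_ctrl, ct_ref_ctrl):
    delta_ct_sample = ct_gene - ct_ref              # normalize target gene to 16S rRNA (treated sample)
    delta_ct_control = ct_gene_ctrl - ct_ref_ctrl   # normalize in the control sample
    delta_delta_ct = delta_ct_sample - delta_ct_control
    return 2 ** (-delta_delta_ct)

# Example: hypothetical guaA Ct values for Exp-20 h relative to Control-24 h
print(relative_expression(ct_gene=24.8, ct_ref=14.1, ct_gene_ctrl=22.9, ct_ref_ctrl=14.0))  # < 1 = downregulated
```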
S. mutans contributes to dental caries, and its main virulence factors are the ability to form biofilms attached to tooth surfaces, the ability to produce organic acids, and the ability to survive under low pH conditions. S. mutans can rapidly regulate the formation of cariogenic biofilms in the presence of dietary sucrose, and it can use glucosyltransferases to synthesise glucan using sucrose as a substrate. This process facilitates the adhesion and colonization of S. mutans on the surface of tooth hard tissue and the production of extracellular polymeric substances. The extracellular polymeric substances produced by S. mutans include extracellular polysaccharides, extracellular DNA, and lipoteichoic acids. Extracellular polysaccharides promote the localized accumulation and aggregation of other microorganisms, while extracellular DNA and lipoteichoic acids are involved in the formation and maturation of extracellular polymeric substance structures. Fully mature biofilms can provide shelter for microorganisms, which can lead to dental caries. As a natural product with many pharmacological activities, cordycepin has not yet been used in caries prevention and control. However, it has been reported that cordycepin can effectively eradicate mature Candida albicans biofilms by reducing the metabolic activity of the microorganisms and destroying the extracellular polysaccharides present in the biofilm. Symbiosis between Candida albicans and S. mutans in the presence of sucrose can promote biofilm formation and aggravate caries. Therefore, if cordycepin can prevent S. mutans biofilm formation, it has the potential to be an anti-caries agent. In this study, the 24-h biofilm was treated with cordycepin at two time points (8 and 20 h) to simulate an environment in which someone might use mouthwash twice per day. We found that, in this treatment mode, 128 µg/mL cordycepin resulted in a significant reduction of 24-h biofilm biomass and metabolism. However, we did not find the MIC of cordycepin against S. mutans. Moreover, the confocal laser scanning microscopy results showed that cordycepin treatment did not change the live bacterial rate in the biofilm, although the biofilm structure appeared looser and more dispersed. Therefore, we suggest that cordycepin may decrease the biomass of the biofilm by affecting the aggregation and adhesion of bacteria rather than by killing them. In recent years, metabolomics has played a key role in the identification of potential biomarkers for caries, as well as in the prediction of caries risk and the investigation of the microbiological mechanisms associated with caries formation. Since the biofilm treated with cordycepin at the 20-h time point was cultured for another 4 h before collection, we speculated that the immediate metabolic changes induced by cordycepin might no longer be evident at 24 h. An immediate treatment group (Exp-20 h) was therefore included in the metabolomics analysis alongside the Control-24 h and Exp-24 h groups.
The metabolites present in the different groups were then identified through metabolomics analysis and matched against the S. mutans KEGG database. This showed that the differential metabolites were mainly involved in pathways associated with purine metabolism, nucleotide metabolism, ABC transporters, metabolic pathways, and biosynthesis of secondary metabolites, of which the purine metabolism and nucleotide metabolism pathways were the most significant. Purine metabolism and nucleotide metabolites not only provide energy sources for bacterial growth but also participate in the regulation of biofilm formation . Therefore, we focused on the levels of differential metabolites involved in the purine and nucleotide metabolic pathways in the groups and measured the expression levels of related genes by quantitative real-time PCR. After immediate treatment with cordycepin (Exp-20 h), significant down-regulation of cyclic AMP and cyclic GMP in the purine metabolic pathway was observed. As second messengers, both cyclic AMP and cyclic GMP can regulate the physiological processes of bacteria, allowing them to rapidly respond to changes in the external environment ; this ability is particularly important for the early aggregation and adhesion of planktonic bacteria leading to biofilm formation . In addition, cyclic AMP also plays a central role in regulating biofilm formation and the expression of virulence genes . Although both cyclic AMP and cyclic GMP were upregulated after 4 h culture, cordycepin may have affected the initial aggregation and adhesion of the bacteria, resulting in a decrease in the biofilm biomass. In addition, after immediate treatment with cordycepin (Exp-20 h), metabolites such as IMP, AMP, GMP, XMP, inosine, adenosine, guanosine, xanthosine, hypoxanthine, and xanthine were significantly downregulated. The rate of microbial production of GMP and AMP was related directly to the bacterial growth rate . AMP and GMP can be converted into ATP and GTP, respectively, providing energy for bacterial DNA replication and RNA transcription . IMP is the precursor of AMP and GMP synthesis , and can produce GMP through the double reaction of IMP dehydrogenase (encoded by guaB ) and GMP synthase (encoded by guaA ), and AMP through the actions of adenylosuccinate synthase (encoded by purA ) and adenylosuccinate lyase (encoded by purB ). IMP can be synthesized in purine nucleotide metabolism in two ways. The first is through the de novo purine nucleotide biosynthesis pathway, using phosphoribosyl pyrophosphate as the initial substrate through a series of enzymatic reactions to form the nucleotide IMP. IMP can also be generated by recycling purine bases or nucleosides from the environment through the purine salvage pathway . Compared with the de novo purine nucleotide biosynthesis pathway, the purine salvage pathway is an energy-saving pathway . The purine salvage pathway can be direct, involving the synthesis of GMP from guanine by hypoxanthine phosphoribosyl transferase (encoded by hprT ), and synthesis of AMP from adenine by adenine phosphoribosyl transferase (encoded by apt ). The purine salvage pathway can also be indirect, involving the synthesis of IMP and XMP from hypoxanthine and xanthine, respectively, and the synthesis of AMP and GMP by IMP and XMP. 
In addition, purine-nucleoside phosphorylase (encoded by deoD or punA) promotes transformations between purine bases and nucleosides, and adenosine deaminase (encoded by add) allows nucleotide cycling by creating a bypass in the pathway, thus allowing various purine bases and nucleosides to be used to produce GMP and AMP. Leonard et al. reported that reductions in bacterial nucleotides and increases in purine bases were common after treatment with multiple antibiotics. In our study, after immediate treatment with cordycepin (Exp-20 h), the levels of nucleotides such as AMP, IMP, GMP, and XMP, and of nucleosides such as adenosine, inosine, guanosine, and xanthosine, decreased, while that of adenine increased significantly. The high level of adenine and low levels of nucleotides may be due to an increase in nucleotide degradation or the inhibition of the purine salvage pathway after cordycepin treatment. We further evaluated the expression levels of genes involved in the purine salvage pathway in S. mutans. We were surprised to find that the expression levels of these genes were significantly reduced in the Exp-20 h group, suggesting that cordycepin may block the synthesis of IMP and AMP from adenine, thus inhibiting the energy synthesis necessary for bacterial growth. In addition, studies have shown that the purA and purB genes, which promote the transformation of IMP to AMP, can enhance the ability of bacteria to resist stressful conditions such as exposure to various antibiotics, low pH values, and heat stress. The guaB and guaA genes, which promote the transformation of IMP to GMP, have also been reported to be associated with bacterial biofilm formation. Taken together, immediate treatment with cordycepin may inhibit purine metabolism, which may affect the synthesis of the energy needed for bacterial growth as well as weaken the ability of the bacteria to resist external stress. However, the effect was reversible. We included another group, which was cultured for 4 h after treatment with cordycepin (Exp-24 h). Unexpectedly, it was found that the genes that were downregulated after cordycepin treatment were upregulated after 4 h of culture, as were the levels of the metabolites adenosine, inosine, guanosine, xanthosine, AMP, IMP, GMP, XMP, cyclic AMP, and cyclic GMP. In addition, compared with the Control-24 h group, the Exp-20 h group showed large numbers of differential metabolites, whereas the Exp-24 h group had few differential metabolites. This suggests that the initial cordycepin treatment resulted in considerable alterations in the levels of bacterial metabolites. However, it is possible that the bacteria were able to recover during continued culture, resulting in the majority of metabolites returning to their previous levels. Although S. mutans showed this capacity for recovery, some metabolites, such as 2’-deoxyadenosine, 2’-deoxyinosine, and adenine, did not return to their original levels after continued culture, suggesting that these metabolites may play key roles in the inhibition of biofilm formation by cordycepin. Compared with the control group, adenine, 2’-deoxyadenosine, and 2’-deoxyinosine were significantly upregulated after immediate treatment with cordycepin. However, the levels of these three compounds were significantly downregulated after continued culture for 4 h: compared with the control group, adenine was significantly downregulated, while 2’-deoxyadenosine and 2’-deoxyinosine remained significantly upregulated.
Rottman et al. reported that Bacillus subtilis was capable of cleaving cordycepin to adenine and cordycepose. Therefore, it is suggested that the increase in adenine content after immediate treatment with cordycepin may have been due to the cleavage of cordycepin by S. mutans. Adenine can be further converted to 2’-deoxyadenosine. As a nucleoside analog, cordycepin can be used as a substrate by adenosine deaminase (encoded by add) to produce 3’-deoxyinosine. Cordycepin may partially inhibit the production of 2’-deoxyinosine from 2’-deoxyadenosine by competitively binding to adenosine deaminase, thus increasing the level of 2’-deoxyadenosine. The structure of the 3’-deoxyinosine produced from cordycepin is similar to that of 2’-deoxyinosine, which may inhibit the further conversion of 2’-deoxyinosine into other substances, resulting in an increase in the content of 2’-deoxyinosine. After continued culture, adenine was sharply downregulated while 2’-deoxyadenosine and 2’-deoxyinosine were upregulated. We speculated that once the cleavable cordycepin had been consumed, the adenine content would no longer increase; likewise, once the cordycepin acting as a competitive inhibitor had been consumed, purine and nucleotide metabolism would be restored. Adenine can be converted to 2’-deoxyadenosine and 2’-deoxyinosine, and 2’-deoxyinosine can be further converted into various nucleosides and nucleotides. However, the level of 2’-deoxyinosine remained high after continued culture. It is known that 2’-deoxyinosine is an aberrant nucleoside that can be incorporated into DNA, adversely affecting nucleotide metabolism and causing irreversible DNA damage. Irreversible damage to bacterial DNA may affect the production of extracellular DNA by S. mutans, which is essential for a mature biofilm structure. Furthermore, Jiang et al. reported that cordycepin can be inserted into DNA bases and double strands, altering the conformation and structure of DNA, thereby interfering with cell activity and ultimately causing death in Escherichia coli and Bacillus subtilis. Therefore, we speculated that cordycepin may affect the S. mutans biofilm by interfering with purine and nucleotide metabolism, altering the conformation and structure of the bacterial DNA, and interfering with bacterial metabolic activity. However, these remain speculations and need to be verified by further experiments. This study found that cordycepin reduced both the biomass and the metabolic activity of the biofilm without killing the bacteria. Furthermore, the metabolic changes in biofilm-associated S. mutans in response to cordycepin treatment were evaluated. Nevertheless, this is a preliminary study. The experiments could not fully reproduce the cariogenic environment of the oral cavity or the conditions of clinical cordycepin use. In addition, the exploration of the mechanism remained at the level of metabolomics and did not verify which metabolites cause the reduction in the S. mutans biofilm; this will be the focus of our future research. In conclusion, cordycepin was found to have anti-biofilm effects. Further studies based on non-targeted metabolomics suggested that the anti-biofilm effect of cordycepin is related to purine and nucleotide metabolic pathways. We consider cordycepin to be a potential caries-preventive agent, and further work is needed to determine its efficacy and safety in vivo.
Optimized laboratory techniques for assessing the quality of pre-stripped DMEK grafts
Corneal transplantation is a globally prominent procedure, with penetrating keratoplasty (PK), Descemet’s stripping automatized endothelial keratoplasty (DSAEK), and Descemet’s membrane endothelial keratoplasty (DMEK) being the predominant techniques, constituting 98.5% of all corneal surgeries in the United States . DMEK, known for superior outcomes such as enhanced visual recovery, reduced rejection risk, and overall graft survival, has witnessed a surge in popularity – . According to the Eye Bank Association of America’s (EBAA) 2022 report, DMEK rose from 6% of all corneal transplants in 2014 to 32% in 2022 . This trend is consistent in Europe, where DMEK accounted for 29.7% of corneal transplants in 2018 , and notably in Germany, where DMEK became the predominant keratoplasty procedure, escalating from 53% in 2016 to 65% in 2021 . In recent years, eye banks have increasingly prepared DMEK grafts, aiming to streamline surgery and minimize the risk of procedure cancellations due to potential inoperable DMEK preparations in the operating room. There are two eye bank preparation methods: pre-stripped DMEK grafts, still attached to the cornea, and preloaded grafts completely detached and housed in a cartridge or injector. According to the 2019 report from the European Eye Bank Association (EEBA), pre-stripped DMEK grafts numbered 2476 (9.8%), while preloaded grafts were 349 (1.4%) of total distributed corneal grafts. In 2022, pre-stripped grafts increased to 3610 (13%), and preloaded grafts rose to 698 (2.5%) , . The DMEK graft, comprising corneal endothelial cells (CECs) on a 5–7 µM thick Descemet’s membrane (DM), requires meticulous preparation to avoid DM tears and CEC loss. Given their crucial role in the success of surgery and graft survival, training and validating technicians or surgeons in graft preparation is vital. An accurate method for assessing DMEK grafts is essential to address evolving needs. In addition, ongoing efforts focusing on innovative techniques for peeling and preserving DMEK grafts also require such a tool. Assessing the quality of corneal grafts in surgeries like PK, DSAEK, and DMEK relies heavily on endothelial quality, a crucial parameter. Endothelial cell density (ECD) serves as the primary criterion for assessing the quality of a corneal graft. The ECD obtained in eye bank, using a specular or light microscope without any cell staining, is currently the sole measure available for clinical use. While this measurement procedure is harmless to cells and tissue, its reliability is yet to be optimized. Viability assay/testing based on Calcein-AM, often co-stained with Hoechst and occasionally ethidium homodimer (Fig. A), is the prevailing laboratory technique for evaluating endothelial quality by providing highly reliable ECD – . Calcein-AM, a cytoplasmic dye specific for living cells (Fig. A), permeates plasma membranes due to its lipophilic nature. Initially non-fluorescent, Calcein-AM is transformed into fluorescent calcein within cells through the activity of intracellular esterases. Endothelial viability is determined as the percentage of the surface covered by viable/fluorescent CECs on the DM. To prevent endothelial damage caused by flat-mounting of corneal tissue, images are captured on intact corneas using a fluorescence macroscope, and a 3D reconstructed surface is generated through extended depth of field (EDF) reconstruction (Fig. B). Hoechst 33,342, staining cell nuclei with a preference for living cells (Fig. 
A), although some dying cells in the early phase can also be found positive . This dye enables more reliable ECD counting by automating the enumeration of thousands of cell nuclei, compared to ECD counting without dye in eye banks, which typically involves only dozens to hundreds of cells . The most crucial outcome provided by viability testing is the viable ECD, which proves to be more reliable than the ECD obtained in the eye bank, accurately representing the actual living CECs transplanted in patients . Viable ECD is calculated by multiplying the ECD obtained from Hoechst staining with the endothelial viability obtained from calcein staining. Ethidium homodimer, a DNA intercalant, stains dead cell nuclei, providing insights into the dying cells during the assay (Fig. A). The ECD obtained through Hoechst staining should only include viable cells. However, it’s possible to observe some dying cells that are positive for both Hoechst and Ethidium . The combined use of Hoechst and ethidium ensures accurate ECD counts by excluding the counting of dead cells. Corneal graft viability assay presents two major drawbacks. When calcein is used to assess pre-stripped DMEK grafts while still attached to the underlying cornea, the lack of contrast in calcein staining makes viability assessment particularly challenging. Detaching the DMEK graft from the cornea, moving it, and mounting it flat on a slide restores the contrast in calcein staining, but these additional manipulations induce further endothelial damage (Fig. C). As a consequence, viability results may no longer accurately reflect the initial state of the endothelium on a pre-stripped graft. This accuracy is essential for a precise assessment of the peeling technique or the method used to preserve the graft in the pre-stripped state. The second major drawback is that immunofluorescence (IF) becomes unfeasible after viability testing due to the residual fluorescence of calcein-AM and ethidium. This limitation leads to the consumption of more corneas, which are valuable human tissues. The objective of this study was to provide simple yet efficient technical guidance to address these two drawbacks and improve the method for assessing corneal graft quality. Influence of the concentration and diluent of calcein-AM on its staining and on CECs Influence on staining. Four different conditions were tested for each of five isolated DMEK grafts. The objective of this step was to increase the intensity of calcein fluorescence in the endothelium. Each graft was divided into four parts and incubated with 2 µM or 4 µM Calcein-AM in PBS or Opti-MEM for 45 min. A drop of BSS was placed on an uncoated slide. The four quarters of the same DM, treated in different ways, were then gently placed with the endothelium facing up using toothless forceps. Excess liquid (BSS) was removed with an ophthalmic sponge, and a few drops of viscoelastic solution (Provisc® OVD) diluted 1:1 in BSS were gently applied on the graft pieces. The slide was immediately photographed under the macroscope. (Fig. A1). Fluorescence intensity was assessed by measuring the mean gray value at 6 different viable zones in each part of the DM using Image J. PBS-2 µM was proposed by the manufacturer and literature and considered as the control condition. To eliminate inter-corneal and inter-experiment variances, the fluorescence intensity of the four differently treated DM parts was standardized by making a ratio with their own control: the DM part stained with PBS-2 µM. 
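For illustration, the normalization described above (mean gray values from six viable zones per DM quarter, each quarter expressed as a ratio to the PBS-2 µM quarter of the same graft) reduces to the short calculation sketched below. The study performed these measurements in ImageJ; the numeric values here are hypothetical.

```python
# Hedged sketch of the within-graft normalization: each condition's mean gray value
# (six viable zones, measured in ImageJ) divided by that of the PBS-2 µM control
# quarter of the same DM. All values are hypothetical.
import numpy as np

quarters = {
    "PBS-2uM": [41, 39, 44, 40, 42, 43],  # control condition
    "Opt-2uM": [55, 52, 57, 54, 56, 53],
    "PBS-4uM": [60, 63, 58, 61, 62, 59],
    "Opt-4uM": [78, 81, 76, 80, 79, 77],
}

control_mean = np.mean(quarters["PBS-2uM"])
ratios = {name: np.mean(values) / control_mean for name, values in quarters.items()}
print(ratios)  # ratios > 1.0 indicate brighter calcein staining than the control quarter
```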
Statistical analysis revealed significant improvements in all three tested conditions (Opt-2 µM, PBS-4 µM, Opt-4 µM) compared to the control. Opt-4 µM outperformed both PBS-4 µM ( p = 0.0196) and Opt-2 µM ( p = 0.0049) significantly, demonstrating superior calcein staining efficacy (Fig. A2). Influence of diluent on CECs. During the viability test, the corneal endothelium incubated with Calcein diluted in PBS showed cracks between cell islands consisting of several to several dozen CECs (Fig. A1,B1). This phenomenon did not occur with Opti-MEM. During the evaluation of cell junctions by immunofluorescence (IF) (Fig. B2), we observed disruption of NCAM (basolateral junction) and ZO-1 (apical junction) in the endothelium incubated with PBS. The CECs formed small clusters/islands of cells with large gaps where the cell junctions were completely disrupted. These gaps correspond to the cracks observed with Calcein/PBS staining. Reducing undesirable fluorescence by additional TB staining In this step, we aimed to enhance the endothelium-specific calcein staining quality by minimizing undesirable fluorescence. Calcein-AM 4 µM in Opti-MEM was used for staining the DMEK graft. Following rinsing in BSS, a series of z-photos was taken before and after rapid TB staining, revealing improved contrast in calcein staining. Twenty reconstructed images from ten pre-stripped DMEK grafts were obtained (Fig. A). To evaluate the impact on viability counting accuracy and speed, these photos were anonymized and then distributed to six operators for viability analysis. Two main criteria were considered for evaluation: inter-operator variability and viability counting time. Inter-operator variability was significantly lower ( p = 0.0256%) in the “after TB” group (3.5 ± 1.8%) compared to the “before TB” group (6.1 ± 4.3%) (Fig. B), representing a 43% reduction. Comparing viability counting time, the “After TB” group (5.6 ± 1.7 min) showed a significant decrease ( p < 0.0001%) compared to the “before TB” group (10.3 ± 7.4 min) (Fig. C), marking a 46% reduction. The results indicated that the TB-induced improvement in calcein staining not only enhanced accuracy but also significantly expedited the viability counting process, demonstrating its potential for efficiency in corneal endothelial viability assessments. Other advantages of additional TB staining for assessing the quality of pre-stripped DMEK grafts TB staining effectively highlighted the DM border and attachment zone on the stromal face of the detached DM in pre-stripped DMEK grafts, aiding in evaluating the peeling operator’s technique and graft quality (Fig. A). TB enhanced visibility of the central attachment zone, assisting in identifying correct detachment (DM attachment to the central cornea with a small area) (Fig. A, top and below center). Additionally, TB staining provided clarity on the DM border (Fig. A, right photos). TB’s impact extended to the HEC assay, quenching ethidium staining, improving calcein staining, and enhancing Hoechst staining selectively in viable zones (Fig. B). This facilitated accurate ECD counting, focusing on Hoechst-stained viable cell nuclei. Realization of double IF after endothelial viability staining We performed a double IF utilizing primary antibodies to NCAM, labeled with Alexa Fluor™ 488-conjugated secondary antibody, and ZO-1, labeled with Alexa Fluor™ 555-conjugated secondary antibody. 
These proteins served as specific markers for CECs, each exhibiting characteristic subcellular localizations, with NCAM found on basolateral cell junctions and ZO-1 located at apical cell junctions. To avoid confusion with calcein (cytoplasmic) or ethidium (nuclear) staining, we ensured distinct subcellular localizations. Immediate double IF following the HEC viability assay revealed challenges due to persistent cytoplasmic calcein and nuclear ethidium staining, hindering NCAM and ZO-1 observation (Fig. A). Prolonged PBS washing resulted in the release of the fluorescent dye (that can be observed using the FITC filter), revealing NCAM staining, while ethidium staining persisted (Fig. B). The endothelial viability assay, utilizing the Hoechst and calcein-AM (HC) combination with the omission of ethidium, ultimately enabled the double IF of NCAM and ZO-1 (Fig. C).
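As a purely illustrative complement to the before/after TB comparison reported above, the short Python sketch below shows how the two evaluation metrics are obtained: the inter-operator variability is the standard deviation of the six operators' viability readings for one photo, and the percentage reductions are recomputed from the group means quoted in the text (6.1 vs 3.5% SD; 10.3 vs 5.6 min). The per-photo readings are hypothetical, and the statistical tests actually applied are those stated in the figure legends.

import numpy as np

# Hypothetical viability readings (%) for a single photo scored by six operators.
viability_before_tb = np.array([78.0, 84.5, 90.2, 81.3, 88.7, 75.9])
viability_after_tb = np.array([83.1, 85.4, 87.0, 84.2, 86.5, 82.8])

# Inter-operator variability = SD of the six readings for that photo.
print(f"inter-operator SD before TB: {viability_before_tb.std(ddof=1):.1f} %")
print(f"inter-operator SD after TB:  {viability_after_tb.std(ddof=1):.1f} %")

# Relative reductions implied by the group means reported in the text.
group_means = {"inter-operator SD (%)": (6.1, 3.5), "counting time (min)": (10.3, 5.6)}
for label, (before, after) in group_means.items():
    print(f"{label}: {100 * (before - after) / before:.0f}% reduction")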
This study addressed two key challenges in evaluating pre-stripped DMEK grafts: improving calcein staining in the viability assay and achieving successful IF post-viability assay. To improve calcein staining, we first studied the influence of calcein concentration and diluent. A notable increase in fluorescence intensity was observed using 4 µM Calcein-AM in Opti-MEM compared to 2 µM in PBS. Bhogal et al. reported that a higher concentration of calcein AM can enhance calcein fluorescence intensity . In this study, we demonstrated that changing the diluent from a simple buffer to a nutrient-containing culture medium can also increase calcein fluorescence intensity in viable cells.
However, we refrained from further increasing the concentration because of potential cytotoxicity from the DMSO vehicle: the Calcein-AM stock solution was prepared at 4 mM in DMSO, and 0.1% DMSO is considered safe for various cell types , . Both the Calcein concentration and the diluent affected fluorescence intensity, with 4 µM being more effective than 2 µM in the same diluent, and Opti-MEM outperforming PBS at the same Calcein concentration. We hypothesized that, as a basal culture medium for cultured CECs, Opti-MEM might enhance Calcein fluorescence by preserving esterase activity in CECs. Among the four tested conditions, Opti-MEM at 4 µM yielded the best results. Incubation in PBS induced crackling patterns in the corneal endothelium, attributed to disrupted cell junctions revealed by ZO-1 and NCAM IF. PBS lacks calcium, which is essential for maintaining CEC junctions – . Our findings suggest that the 0.9% NaCl solution, commonly utilized for ECD counting in European eye banks, may enhance the visualization of corneal endothelial cell contours by weakening junctions. However, it could potentially damage the endothelium during the process. The second improvement in calcein staining focused on the effects of post-staining with TB. The introduction of TB staining following viability testing provided multifaceted benefits in assessing pre-stripped DMEK grafts. First, it significantly enhanced calcein staining by mitigating undesirable fluorescence, expediting viability counting, and minimizing inter-operator variability and counting time. Second, TB allowed concurrent visualization of the DM border and central attachment. Third, it aided in viable ECD counting in two ways: (1) it limits Hoechst staining to living cells, as TB staining quenches Hoechst fluorescence in the nuclei of dying cells; (2) it prevents undesirable fluorescence from keratocyte nuclei stained by Hoechst, as TB stains the stromal side of DM and blocks any fluorescence from underlying cells. The main undesirable fluorescence in pre-stripped DMEK grafts arises from the direct exposure of the stroma to Hoechst-Calcein AM solution, leading to strong fluorescence staining of keratocytes. TB addressed this issue by staining the stromal side of the detached DM, blocking stromal fluorescence through its quenching properties , , and enhancing calcein staining on the endothelial side. Moreover, TB not only masks and quenches the non-specific fluorescence of dead cells but also targets the denuded DM zones on the endothelial side, thereby enhancing the fluorescence contrast of viable CECs. In a surgical context, TB could assist eye bank technicians in perfecting techniques for peeling pre-stripped DMEK grafts, ensuring smooth edges and correct center attachment of DM. Additionally, TB aided in determining ECD through Hoechst staining in viable cells. TB is recognized for entering and staining the nuclei of dying cells, thereby suppressing nuclear fluorescence. Consequently, TB ensures that Hoechst nuclear staining is exclusive to living cells. This simplifies ECD counting and makes ethidium unnecessary. The current viable ECD counting is based on endothelial viability and nuclei/Hoechst + counting in five areas of the entire endothelium , making the viable ECD an estimation. Counting all nuclei of viable cells would yield a highly accurate viable ECD. Hoechst alone may stain dying cells in their early stages . However, the addition of TB ensures that Hoechst staining labels only viable cells.
This presents the potential to count all viable cell nuclei in the future, once macroscope/microscope technology enables accurate imaging of all Hoechst-stained nuclei in the endothelium. Overall, TB emerged as a valuable tool, not only for optimizing viability and viable ECD assessments but also for offering valuable insights to technicians for their DM peeling technique. The enhancement in calcein contrast provided by TB is essential for capturing viability images of the DMEK graft while it remains attached to the cornea. Bhogal et al. proposed a curved, transparent DMEK graft imaging chamber that enables calcein imaging of a separated DMEK graft, eliminating the need for additional cutting when placing the graft on a glass slide . However, this method still requires expertise from a DMEK ophthalmologist to minimize endothelial damage during graft detachment, displacement, and unfolding. Additionally, as the ‘DMEK graft imaging chamber’ is not commercially available, it can be challenging to obtain. Our approach, involving the simple addition of TB, is easy to implement, requires no advanced technical skills, and allows even novice laboratory technicians to reliably assess DMEK graft viability. IF on flat-mounted corneal endothelium is crucial for detailed CEC status, serving as a valuable tool for evaluating graft preparation methods and corneal graft preservation – . However, persistent calcein and ethidium staining post-viability testing obstructs commonly used IF filters. We propose a new approach, using double labelling with HC and prolonged rinsing in PBS, allowing subsequent IF. Incubation in PBS effectively removes calcein residues from methanol-fixed grafts. TB addressed the omission of ethidium staining, indicative of dying cells, by resolving confusion in Hoechst staining, specifically focusing on viable cells. Calcein-AM, initially a non-fluorescent, lipophilic molecule, transforms into hydrophilic and fluorescent calcein upon membrane diffusion into living cells. If the cell membrane remains intact, hydrophilic calcein is retained in the cytoplasm. To facilitate the elimination of hydrophilic, fluorescent calcein, a permeabilized cell membrane is necessary. Methanol, serving as both a fixative and permeabilizer, does not require additional permeabilization. If you used similar fixatives, such as ethanol or acetone, the situation would be the same as methanol fixation, where a simple PBS rinse would be sufficient to remove hydrophilic calcein from inside the cells. In contrast, formaldehyde-fixed cells require an additional step of cell membrane permeabilization with 0.5% Triton X-100 before PBS rinsing. Ethidium is a DNA intercalator that can enter a dying cell with a damaged plasma membrane. Once fixed to DNA, it is difficult to remove. We have observed in this study that all cell nuclei are ethidium-stained after the IF procedure following viability testing by HEC. We assume that this homogeneous staining of all cell nuclei is a result of ethidium diffusion from dead cell nuclei to neighboring cells after fixation and permeabilization. In addition, TB staining ensures Hoechst staining exclusively in living cells, rendering ethidium unnecessary. As a result, we propose omitting ethidium during viability testing. 
If your microscope is equipped with a fourth CY5 (far-red) filter, you can perform double immunostaining by selecting a fluorochrome compatible with the CY5 filter for the second antibody (ZO-1 in this study), in cases where ethidium is required for your viability assay. The technical tips described in this study for pre-stripped DMEK grafts are theoretically applicable to other types of corneal grafts, including PK and DSAEK grafts. In our laboratory, we routinely apply these tips for endothelial viability assays on whole corneas. Although the improvements are less pronounced than with pre-stripped DMEK grafts, moderate enhancements in calcein and Hoechst staining still aid in viability and ECD analysis. For viability assessments on separate DMEK grafts, the benefits of the calcein staining tips are minimal, as unwanted fluorescence from other parts of the cornea is limited. However, the techniques for eliminating fluorescent calcein are applicable to all types of corneal grafts for subsequent immunofluorescence analysis. In conclusion, we propose assessing the endothelial viability of pre-stripped DMEK grafts through incubation with 4 µM Calcein-AM and Hoechst, both diluted in Opti-MEM, followed by rapid TB staining. This approach offers several advantages: (1) improved calcein staining, enhancing the reliability and speed of its analysis; (2) determination of DM status and its attachment position; and (3) easy counting of ECD. Additionally, a double IF can be performed after the viability assessment using HC by simply washing the graft in copious PBS for 24 h. Furthermore, we believe that the techniques elucidated in this study are not limited solely to pre-stripped DMEK grafts but are applicable to other graft types, including PK and DSAEK grafts. Human corneas and ethical statement Nineteen human corneas, obtained from donors aged 75 ± 16 (62, 100) years, with a post-mortem interval of 11 ± 6 (4–24) h, were utilized. Stored in an organ culture system using the culture medium CorneaMax (EYEMAX00-1 C, Eurobio Scientific) at 31 °C for 19 ± 11 (4–36) days, these corneas had been rejected by the Eye Bank of Besancon and Saint Etienne. They were deemed unfit for transplantation and considered biological waste. The BiiO laboratory, authorized by the Ministry of Higher Education, Research, and Innovation of France (Ministère de l’Enseignement Supérieur, de la Recherche et de l’Innovation, MESRI) under dossier number DC-2023–5458, conducted this research on this human corneal waste without requiring additional ethical approval. All procedures adhered to the Declaration of Helsinki Principles for biomedical research involving human tissue. Preparation of pre-stripped DMEK grafts The DMEK grafts were pre-stripped using a Zeiss operating microscope (Zeiss, Oberkochen, Germany), with the cornea positioned on a Vacuum Donor Cornea Punch block (K20-2019, Barron). Trabecular meshwork excision in a 360° manner was performed using fine-toothed forceps to create a rupture around the extreme periphery of DM (Fig. A). 0.4% trypan blue (TB) (T8154, Sigma-Aldrich) was diluted to 0.2% in Balanced Salt Solution (BSS). It was applied to enhance DM rupture visualization, and in cases without rupture, fine forceps were used to peel the extreme periphery of DM (Fig. B). Next, a curved, flat-tipped separator (1172, Malosa, BVI Medical) was used to separate the approximately 0.5 mm wide DM border from the corneal stroma along the entire 360° graft edge (Fig. C). TB solution could be reapplied if needed, ensuring DM edge separation.
The corneal endothelial side was filled with CorneaMax or BSS to prevent DM folding and tearing. The DM was detached from the underlying stroma halfway across the cornea using straight flat-tip forceps, repeating this process three times at 90° intervals to maintain central attachment (Fig. D). The DM was then repositioned against the stroma by removing liquid from beneath it using a cellulose sponge. Endothelial viability assay and optimizations Standard protocol. A triple labeling protocol, named HEC (Hoechst-Ethidium-Calcein) triple labelling, using Hoechst 33342, ethidium homodimer, and calcein-AM was devised for assessing corneal endothelial viability and viable ECD , , , . The HEC mixture, comprising 2 µM calcein-AM (FP-FI9820, Interchim), 4 µM ethidium (FP-AT758A, Interchim), and 5 µg/ml Hoechst 33342 (B2261, Sigma-Aldrich), was prepared in phosphate-buffered saline (PBS) (SH30028, HyCloneTM). Corneal grafts were gently rinsed in PBS, placed endothelial side up in a concave-shaped support, and incubated in the dark at room temperature (RT) for 45 min with 150 µl of the HEC mixture. After a brief rinse with PBS, the stained endothelium was ready for observation under a fluorescence macroscope. Calcein staining optimizations. Calcein-AM staining for endothelial viability on pre-stripped DMEK grafts was suboptimal (Fig. C). To enhance staining, we aimed to increase specific calcein staining on the corneal endothelium while reducing undesirable fluorescence. We conducted experiments using Opti-MEM phenol red-free (11058021, Gibco) instead of PBS as the diluent for Calcein-AM staining, with the goal of enhancing fluorescence intensity. Additionally, we explored a higher concentration of Calcein-AM (4 µM) compared to the standard 2 µM. To mitigate undesirable fluorescence, pre-stripped grafts were immersed in 0.2% TB (0.4% TB reduced to 0.2% by adding an equal volume of BSS) on the endothelial side. This step was performed after the viability testing. Gentle agitation was necessary to facilitate TB entry between the detached DM and the stroma, resulting in staining of the stromal side of the DM. After a 20-second incubation, TB was removed, and the graft was immersed in Opti-MEM or BSS to eliminate excess TB. Observation. A fluorescence macroscope (Macro Zoom Fluorescence Microscope System, MVX10, Olympus, Tokyo, Japan), equipped with CellSens imaging systems software (Olympus, Hamburg, Germany) and a DP74 color and monochrome camera was utilized, featuring three fluorescence filter sets: DAPI: Ex/325–375 nm, DM/400 nm, Em/435–485 nm (for Hoechst staining). FITC: Ex/450–490 nm, DM/495 nm, Em/500–550 nm (for calcein staining). CY3: Ex/520–570 nm, DM/565 nm, Em/570–640 nm (for ethidium staining). For viability measurement on pre-stripped DMEK grafts, a Z-stack of 6 or 7 images, separated by a Z-interval of approximately 600 µm, was acquired across the entire endothelial surface using a 1× objective (MV PLAPo, Olympus) and 0.8× zoom. The EDF plugin of ImageJ reconstructed a single, entirely focused composite image, enhancing accuracy in detecting calcein-positive areas, considering the cornea’s natural curvature (Fig. B). Quantitative assessment of the impact of TB on endothelial viability. Endothelial viability, assessed through calcein images using the CorneaJ plugin in ImageJ, underwent semi-automatic analysis, requiring manual adjustments for accuracy, particularly in cases of suboptimal calcein staining, as previously described .
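For illustration of the principle behind this measurement (and not of the CorneaJ plugin itself, which is semi-automatic and allows manual correction), the Python sketch below estimates endothelial viability as the fraction of the analyzed region whose calcein signal exceeds an intensity threshold, using a synthetic image; a real analysis would also require delineating the graft area and choosing or adjusting the threshold carefully.

import numpy as np

rng = np.random.default_rng(42)

# Synthetic 8-bit "calcein image": bright viable endothelium with a dark,
# calcein-negative circular defect in the centre (dead or denuded zone).
img = rng.normal(150.0, 10.0, size=(512, 512))
yy, xx = np.mgrid[:512, :512]
defect = (yy - 256) ** 2 + (xx - 256) ** 2 < 80 ** 2
img[defect] = rng.normal(30.0, 8.0, size=int(defect.sum()))
img = np.clip(img, 0, 255)

# Threshold placed between background and viable signal; viability is the
# calcein-positive fraction of the analyzed region (here, the whole image).
threshold = 90.0
viability = (img > threshold).mean()
print(f"estimated endothelial viability: {100 * viability:.1f} %")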
To quantitatively assess the impact of adding TB staining, we utilized 10 pre-stripped DMEK grafts. Two photos were captured for each graft: one before and another after TB staining. All 20 photos were analyzed by six different operators. Operators, unaware of TB treatment, recorded viability and counting time for each photo. Two criteria were employed for comparison before and after TB. Inter-operator variability. It is represented by the standard deviation (SD) of the six endothelial viabilities counted by six different operators for each photo. The goal was to assess whether TB staining could reduce the inter-operator variabilities/SD. Counting time. It is the time required to analyze the viability of a photo. An enhancement in the viability/calcein image should lead to a reduction in counting time. Immunofluorescence (IF) and optimizations Protocol for pre-stripped DMEK grafts. The IF of flat-mounted corneal endothelium was previously developed and optimized , . After the viability assay, the pre-stripped DMEK graft underwent gentle washing in BSS. The DMEK graft, being peeled away from the central attachment but still resting on the cornea, was immersed in 25 ml pure methanol fixative for 30 min at RT. After rehydration in PBS, the DM was separated from the cornea and then incubated in blocking buffer (2% bovine serum albumin (BSA) and 2% goat serum in PBS) at 37 °C for 30 min. A double IF involved incubating the DM in a primary antibody solution [NCAM (Mouse IgG, MAB24081, R&D system) and ZO-1 (rabbit IgG, 40–2200, Invitrogen)] for 1 h at 37 °C. After rinses in PBS, secondary antibody [Alexa Fluor 488 goat anti-mouse IgG (A-11001, Invitrogen) and Alexa Fluor 555 goat anti-rabbit IgG (A-21429, Invitrogen)] incubation followed for 1 h at 37 °C, supplemented with 5 µg/ml DAPI for nuclear counterstaining. The DM could be cut into parts if necessary, with adjusted antibody solution volumes to avoid waste. After further rinses in PBS, the DM was delicately spread on a drop of fluorescent mounting medium (NB-23–00158, NeoBio Mount Fluo, NeoBiotech) deposited on a glass slide. For an entire DM, cutting was necessary to prevent folds, and a glass coverslip was placed over it. It was crucial to avoid microbubbles during this process. Optimizations. To eliminate residual calcein fluorescence within CECs, the methanol-treated (fixed and permeabilized) endothelium/DM was immersed in 25 ml PBS overnight at 4 °C. The ethidium staining is visible using the CY3 filter. Therefore, to immunostain proteins at CY3 wavelengths, it is important to avoid using the ethidium stain. Observation. For IF imaging, an Olympus IX81 fluorescence inverted microscope (Olympus, Tokyo, Japan), equipped with CellSens imaging systems software and a monochrome camera (ORCA-Flash 4.0, Hamamatsu), was employed. Three fluorescence filter sets mirrored those used on the macroscope: DAPI: Ex/325–375 nm, DM/400 nm, Em/450–490 nm (for DAPI or Hoechst staining). FITC: Ex/460–495 nm, DM/505 nm, Em/510–550 nm (for secondary antibodies combined with Alexa 488 or calcein staining). CY3: Ex/520–560 nm, DM/565 nm, Em/572.5–647.5 nm (for secondary antibodies combined with Alexa 555 or ethidium). Precautionary notes. When performing the IF after the endothelial viability assay on the same graft, it is crucial to take certain precautions during the acquisition of the viability assay photos to prevent any alterations to CECs.
Ensure that the endothelium remains moist at all times, and keep the observation and photo-acquisition times within reasonable limits to avoid potential cell phototoxicity. Additionally, refrain from maximizing the intensity of the fluorescence used, whenever possible. Statistics GraphPad Prism was used for statistical analysis and graph construction; the specific tests are mentioned in the respective figure legends.
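For completeness, the viable ECD calculation described in this article can be summarized in the following minimal Python sketch: the ECD derived from Hoechst-positive nuclei counted in five fields of known area is multiplied by the calcein-based viability. The counts, field area and viability below are hypothetical and only illustrate the arithmetic.

import numpy as np

# Hypothetical Hoechst-positive nuclei counts in five fields of the endothelium.
nuclei_counts = np.array([265, 248, 271, 259, 243])
field_area_mm2 = 0.1          # hypothetical area of each counting field (mm^2)

ecd_hoechst = nuclei_counts.sum() / (len(nuclei_counts) * field_area_mm2)  # cells/mm^2
viability = 0.92              # calcein-based endothelial viability (fraction)

viable_ecd = ecd_hoechst * viability
print(f"ECD (Hoechst): {ecd_hoechst:.0f} cells/mm^2")
print(f"viable ECD:    {viable_ecd:.0f} cells/mm^2")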
Metaboloepigenetics: Role in the Regulation of Flow-Mediated Endothelial (Dys)Function and Atherosclerosis
eeebc5ef-6413-4259-849a-d859fcc4822c
11898952
Cardiovascular System[mh]
Atherosclerosis is a cardiovascular disease characterized by chronic inflammation and thickened arterial walls due to lipoprotein-rich plaques, which form on the vascular endothelium, causing the narrowing of arterial lumen and the restriction of blood flow . Resulting cardiovascular complications, such as ischaemic heart disease and strokes, are the leading causes of death worldwide . Much of the pathophysiology of atherosclerosis is well defined, but, despite increased efforts in disease prevention, the global prevalence of cardiovascular diseases continues to grow . Given its direct and constant contact with blood flow, the endothelium is regulated by haemodynamic forces induced by fluid shear stress, which is defined as the frictional force per unit area from flowing blood. Shear stress acts on mechanical sensing receptors (known as mechanosensors or mechanoreceptors) on endothelial cells (ECs), which transduce these mechanical cues into biochemical signals, a process called mechanotransduction, regulating EC physiology in both homeostatic and diseased states . Examples of mechanosensors include membrane structures (cadherin, integrin, PECAM-1 and PlexinD1), receptors (G protein-coupled) and ion channels (Piezo1) (reviewed in ). Signalling pathways and critical nodes that are induced by shear stress and mechanotransduction include mitogen-activated protein kinases (MAPKs), phosphatidylinositol 3-kinase (PI3K)-AKT, mammalian target of rapamycin (mTOR), RHO GTPases and Yes-associated protein (YAP) and transcriptional coactivator with PDZ-binding motif (TAZ). Several of these pathways lead to the activation of shear stress-responsive transcription factors with well-known functions in EC biology, such as Krüppel-like factor 2 (KLF2) and KLF4 and nuclear factor erythroid 2-like (NRF2), in addition to hypoxia-inducible factor 1-alpha (HIF-1α) and nuclear factor kappa-light-chain-enhancer of activated B cells (NF-κB) . The importance of the endothelium in the etiology of atherosclerosis cannot be disputed, and indeed, endothelial dysfunction is widely accepted as the initiating factor in the development of atherosclerosis (reviewed in ). This is underscored by the fact that early atherosclerotic lesions preferentially develop at branching or curved vascular regions, such as the carotid bifurcation in the lateral wall of the internal carotid artery, the proximal portion of the left anterior descending coronary artery and the lesser curvature of the aortic arch . At these sites, disturbed flow predominates. This is detected by ECs, which subsequently elicit cellular responses that culminate in the induction of a pro-inflammatory phenotype and the initiation of atherosclerosis . The dysregulation of the epigenome has been increasingly associated with the initiation and progression of atherosclerosis, which has been reclassified as an epigenetic disease (reviewed in [ , , , ]). Indeed, shear stress has been shown to regulate the epigenomic and transcriptomic signatures, as well as the metabolic profile, of ECs [ , , ]. Epigenetic modifications are determined by both the abundance of epigenetic-modifying enzymes and the availability of substrates, cofactors and donors that ultimately regulate the rate of these reactions. 
Thus, owing to their intrinsic kinetic and thermodynamic parameters, epigenetic-modifying enzymes are highly sensitive to fluctuations in the availability of certain metabolites caused by alterations in metabolic pathways, which consequently act to modulate the levels of specific epigenetic modifications . Metaboloepigenetics, defined as the link between metabolism and the epigenetic control of gene expression, has been demonstrated to be crucial in cancer biology, immune cell activation and cell fate determination, as fluctuations in metabolites influence epigenetic alterations that contribute to diseased states and differentiation programmes [ , , ]. Importantly, the endothelium has emerged as a metabolically dynamic organ that potentially regulates the epigenetic landscape in normal and dysfunctional ECs to effect important roles in vascular homeostasis and disease . Here, we address the evidence that changes in shear stress link to the activation of specific mechanosensory pathways in ECs, which are associated with epigenetic regulation in both homeostasis and disease. We will focus on DNA methylation and histone post-translational modifications (PTMs). The metabolic reprogramming of dysfunctional ECs will also be discussed with regard to its impact on the availability of metabolites that can be used in epigenome-modifying reactions. To the best of our knowledge, a direct connection between epigenetic alterations and changes in metabolism in ECs has yet to be determined. Therefore, we speculate on the possible role of epigenetic regulation through metabolism in disturbed flow-induced endothelial dysfunction and atherosclerosis. The vascular endothelium is composed of a monolayer of ECs which lines the innermost layer of all blood vessels. ECs constitute the first barrier between blood and the rest of the vascular wall, regulating vascular permeability as a selective barrier that controls the passage of fluids, solutes and immune cells between blood and tissue . In vivo, ECs are constantly exposed to blood flow, which regulates nearly every aspect of endothelial function. Stable blood flow, characterized by high-magnitude, unidirectional laminar shear stress, is observed in straight, non-branching regions of the vasculature and is associated with an atheroprotective phenotype. By contrast, branch points, curves and bifurcations are considered atheroprone regions. At these sites, rather than exerting unidirectional and pulsatile shear stress, laminar flow is disturbed and can be oscillatory and turbulent, thus exerting lower shear stress . The effects of stable and disturbed flow have been studied in both in vivo and in vitro settings. Partial carotid ligation (PCL) has been extensively used in atheroprone mouse models (ApoE −/− and Ldlr −/− ) to induce and assess the effect of disturbed fluid shear stress in the development of atherosclerosis . In this in vivo model, three caudal branches of the left common carotid artery (LCA) are surgically ligated without manipulating the LCA itself, therefore inducing disturbed flow in the LCA with characteristic low-magnitude oscillatory shear stress patterns. Importantly, the contralateral right carotid artery (RCA) continues to be exposed to stable flow . In vitro flow models include the exposure of ECs to different shear stress conditions, either on a parallel-plate flow chamber or a microfluidic channel , thus mimicking the flow that ECs are exposed to in the vasculature. 
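As an aside on how such in vitro conditions are set, the wall shear stress in a parallel-plate flow chamber is commonly estimated with the classical formula τ = 6µQ/(wh²), where µ is the dynamic viscosity of the perfusate, Q the volumetric flow rate and w and h the chamber width and height. The Python sketch below shows how low (atheroprone) and higher (atheroprotective) shear stress levels can be obtained by adjusting the flow rate; the chamber dimensions, viscosity and flow rates are illustrative and are not taken from any particular study cited here.

def wall_shear_stress(flow_rate_m3_s, width_m, height_m, viscosity_pa_s=0.75e-3):
    """Wall shear stress (dyn/cm^2) in a parallel-plate chamber: tau = 6*mu*Q/(w*h^2)."""
    tau_pa = 6.0 * viscosity_pa_s * flow_rate_m3_s / (width_m * height_m ** 2)
    return tau_pa * 10.0  # 1 Pa = 10 dyn/cm^2

# Illustrative chamber: 1 cm wide, 250 µm high, perfused with culture medium
# (viscosity assumed to be about 0.75 mPa·s).
width_m, height_m = 1.0e-2, 250.0e-6
for label, flow_ml_min in [("low, atheroprone", 1.0), ("high, atheroprotective", 10.0)]:
    flow_m3_s = flow_ml_min * 1.0e-6 / 60.0
    tau = wall_shear_stress(flow_m3_s, width_m, height_m)
    print(f"{label}: {flow_ml_min} ml/min -> {tau:.1f} dyn/cm^2")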
KLF2 is an endothelial-expressed transcription factor that plays important roles in vascular homeostasis and is downregulated in ECs in dysfunctional states . Stable flow has been shown to increase the transcription of KLF2 via the MEKK2/3–MEK5–ERK5 kinase cascade, which acts to downregulate the expression of pro-atherogenic genes . KLF4 is another transcription factor that is essential for EC lineage . Laminar (stable) shear stress leads to the upregulation of several atheroprotective genes via KLF4, such as NOS3 , THBD and ITPR3 , and has also been shown to regulate the integrity of adherens junctions by the phosphorylation and degradation of vascular endothelial (VE)-cadherin . Laminar shear stress also promotes tight junction stability by regulating the expression of occludin and its attachment to the actin cytoskeleton [ , , ]. Furthermore, atheroprotective shear stress has been shown to inhibit YAP/TAZ activity . Functionally, laminar flow promotes the interaction between integrin β3 and Gα13, resulting in the inhibition of RhoA and phosphorylation (and subsequent inactivation) of YAP. Inactivation of YAP further suppresses c-Jun NH2-terminal kinase (JNK) signalling, leading to reduced inflammation, thus promoting EC homeostasis . One of the most important functions of the vascular endothelium is the production of nitric oxide (NO) from L-arginine by endothelial NO synthase (eNOS), which is encoded by the NOS3 gene ( ). NO acts on vascular smooth muscle cells (VSMCs) in the tunica media, thus regulating vascular tone by promoting smooth muscle relaxation and vasodilation . Mechanistically, it has been found that stable flow induces eNOS activation via the activation of the PI3K-AKT pathway through a platelet endothelial cell adhesion molecule (PECAM)-1, VE-cadherin and vascular endothelial growth factor receptor (VEGFR)2/3 complex ( ). NO also has anti-inflammatory effects, with potential roles in inhibiting NF-κB, decreasing the expression of monocyte chemoattractant protein-1 (MCP-1) and vascular cell adhesion molecule-1 (VCAM-1), thereby limiting leukocyte adhesion and safeguarding endothelial homeostasis ( ) . Changes in haemodynamic shear stress are a major cause of endothelial dysfunction and atherosclerotic lesions, which, as stated above, develop preferentially at vascular branches and curvatures, where disturbed flow predominates . Disturbed flow has been shown to cause eNOS uncoupling and decrease the bioavailability of NO ( ). Additionally, these atheroprone regions are also characterized by the upregulation of ICAM-1 and VCAM-1, leading to enhanced leukocyte adhesion and subsequent extravasation. Disturbed flow also leads to the internalization of VE-cadherin, increasing vascular permeability, and to the secretion of chemokines and cytokines, further enhancing inflammation ( ) . Of note, low shear stress has been shown to upregulate CXC chemokine-related transcripts, such as CXCL12 , in a heparan sulfate proteoglycan-integrin β3-FAK-dependent manner. The activation of this signalling axis was shown to lead to the phosphorylation and activation of MAPKs p38β and p38δ, resulting in the activation of NF-κB . Integrin α5 has also been shown to be activated upon disturbed flow via the translocation of membrane lipid rafts. In Ldlr −/− mice, the activation of integrin α5 was localized exclusively to atheroprone regions of the vasculature, which were associated with a pro-inflammatory phenotype .
The boosted EC inflammatory phenotype leads to enhanced leukocyte recruitment and adhesion and increased paracellular permeability. LDL may accumulate in areas where paracellular permeability has been compromised, known as paracellular pores, or may even be transported across ECs at atheroprone sites . Importantly, although endothelial dysfunction and atherosclerosis develop preferentially at curved and branched regions, which are ubiquitous to every individual, the presence of other risk factors, such as hyperglycaemia, hypercholesterolaemia, obesity and smoking, potentiate endothelial dysfunction at these disturbed flow-exposed sites, thus increasing the risk of atherosclerosis . Increasing evidence suggests that epigenetic alterations are important factors in the development of endothelial dysfunction and atherosclerosis . Epigenetics encompasses the study of mechanisms which regulate gene expression, independent of the primary genetic sequence, by modulating chromatin structure and accessibility of cis -regulatory regions to the transcriptional machinery. Critical epigenetic mechanisms commonly involve the enzymatic addition and removal of covalent modifications to both DNA and histones of metabolite-derived groups . Methylation is the main epigenetic modification on genomic DNA, and it typically occurs on cytosine (C)-rich motifs, known as CpG islands, which are usually located at or near gene promoters, and is typically associated with the repression of gene transcription . Histones are subject to numerous PTMs involving metabolites, including acetylation, methylation, lactylation, phosphorylation and oxidation . However, acetylation and methylation are the two most abundant and well-characterized PTMs. Respectively, histone-modifying enzymes utilize acetyl-CoA, S-adenosylmethionine (SAM), lactate, ATP and reactive oxygen species (ROS) as substrates (reviewed in [ , , ]). These modifications act in specific and diverse ways to either increase or decrease the affinity of the histones to DNA (and hence affect the compactness of the chromatin) and can thus act as activators or repressors of gene expression . The covalent modifications are catalyzed by enzymes, termed epigenetic writers and erasers, which add or remove epigenetic marks using metabolites as substrates, donors or cofactors . DNA methylation reactions are carried out by DNA methyltransferases (DNMTs) using SAM as the methyl donor. DNMT1 is responsible for “copying” the original methylation signature of DNA during replication in which the newly synthesized strand (lacking methylation) is methylated, allowing the conservation of the DNA methylation profile in dividing cells , while DNMT3A and DNMT3B are de novo methyltransferases that introduce new methylation patterns that can be inherited by daughter cells . Similarly, histone methylation reactions are carried out by histone methyltransferases (HMTs), which transfer methyl groups from SAM to arginine and lysine residues on histones H3 and H4 . The methylation of arginine residues is associated with active transcription, while the methylation of lysine residues is linked to both active and repressed transcription depending on the residue that is methylated and the number of methyl groups that are added . For instance, trimethylation on lysine (K) 4 of histone H3 (H3K4me3) is associated with active transcription, while trimethylation on K9 or K27 of histone H3 (H3K9me3 and H3K27me3, respectively) are repressive marks . 
Thus, in both DNA and histone methylation reactions, the methyl group is derived from SAM, which is synthesized from methionine and ATP and serves as the universal methyl donor in numerous biological methylation processes . Methionine uptake and one-carbon metabolism, which involves the inter-related folate and methionine cycles, therefore play an important role in methylation reactions. Upon the transfer of its methyl group, SAM is converted into S-adenosylhomocysteine (SAH), which can inhibit DNMTs and HMTs if accumulated . Glycine, serine and threonine provide one-carbon units to tetrahydrofolate to transfer to homocysteine and facilitate the generation of SAM ( ) . The removal of DNA methylation can occur passively (at DNA replication or in the absence of DNMT-mediated “maintenance” of methylation) or by active mechanisms which occur independently of cell division. Notably, the ten-eleven translocation (TET) family of proteins (TET1-3) act as such epigenetic erasers by catalyzing the stepwise oxidation of 5-methylcytosine (5mC) to 5-hydroxymethylcytosine (5hmC), 5-formylcytosine (5fC) and 5-carboxycytosine (5caC) . Unmethylated cytosine is restored by the excision of 5fC or 5caC by thymine-DNA glycosylase and base excision repair mechanisms . TET enzymes are members of the 2-oxoglutarate-dependent dioxygenase (2-OGDD) superfamily, which require both molecular oxygen (O2) and α-ketoglutarate (α-KG) as substrates . α-KG is a key intermediate in cellular metabolism, primarily produced in the TCA cycle within the mitochondria, through the conversion of isocitrate by isocitrate dehydrogenase (IDH) or by the anaplerotic glutaminolysis pathway . Furthermore, due to their structural similarity to α-KG, the metabolites immediately downstream of α-KG in the TCA cycle (succinate and fumarate) and 2-hydroxyglutarate (2-HG), which can be generated by dysregulated/mutant IDH, can act as inhibitors of TET demethylases . Thus, both the abundance of α-KG and the balance between α-KG and its antagonists may impact the activities of TET enzymes. Hence, the rates of DNA demethylation may, like those of methylation, be impacted by cellular metabolism . Similarly, the demethylation of histones can be carried out by Jumonji-domain-containing histone demethylases (JHDMs) which, like TETs, are members of the 2-OGDD superfamily and require O2 and α-KG. While histone methylation is associated with both active and repressive marks, histone acetylation is exclusively associated with permissive gene transcription . Upon the transfer of acetyl groups by reactions catalyzed by histone acetyltransferases (HATs), histones become less tightly bound to negatively charged DNA, promoting a more open chromatin structure . Histone acetylation is deeply impacted by the availability of acetyl-CoA, which is the main donor of acetyl groups for acetylation reactions and can be generated through several metabolic pathways, including glycolysis, through the conversion of pyruvate into acetyl-CoA by pyruvate dehydrogenase, fatty acid oxidation (FAO) and de novo lipogenesis . Because acetyl-CoA cannot diffuse across membranes, it condenses with oxaloacetate to produce citrate, which is mobilized to the cytoplasm through the malate–citrate antiporter system. In the cytoplasm, citrate is then converted into acetyl-CoA by ATP citrate lyase (ACLY), which plays an important role in maintaining the levels of acetyl-CoA for histone acetylation . Acetyl-CoA can also be generated from acetate by acetyl-CoA synthetase 2 (ACSS2).
Both ACLY and ACSS2 are found in the cytoplasm but can also be present in the nucleus, where they are believed to produce high levels of acetyl-CoA, likely near sites of histone acetylation . The removal of histone acetylation marks is catalyzed by histone deacetylases (HDACs). Class I, II and IV HDACs are influenced by the availability of butyrate and β-hydroxybutyrate, produced during FAO and ketogenesis . These ketone bodies are inhibitors of HDACs and are therefore associated with an open chromatin conformation and active gene expression. Sirtuins (SIRTs; class III HDACs) use oxidized nicotinamide adenine dinucleotide (NAD+) as a cofactor, providing a link between levels of NAD+ and SIRT-mediated histone deacetylation . Thus, these covalent epigenetic marks are impacted by both the expression of epigenetic writers and the availability of metabolite co-factors and hence metabolic pathways . 4.1. DNA (De)Methylation Epigenetic modifications, including DNA modifications, are recognized to be highly dynamic and to be impacted by environmental stimuli . In cardiovascular diseases, the association of aberrant DNA methylation with atherosclerosis development is increasingly being studied. At a global level, hypomethylation within atherosclerotic tissue has typically been observed, while locus-specific aberrant hypermethylation of promoter regions of specific genes known to be involved in disease pathology is a common hallmark of atherosclerosis (reviewed in ). Although technically challenging, several studies have also addressed and reported changes to methylation patterns, specifically within the endothelium of atherosclerotic tissue . The effects of both physiological (high-magnitude and unidirectional) and pathological (low-magnitude or disturbed) flow upon the methylome and transcriptome of the underlying endothelium have also been investigated in several studies both in vivo and in vitro. Disturbed fluid shear stress has been shown to dysregulate the expression of DNMTs and TET enzymes in ECs, impacting the endothelial methylome . In vivo, the aortic EC methylomes in sites of disturbed (aortic arch) and physiological, unidirectional (descending aorta) flow were compared in pigs. In this study, genome-wide methylated DNA immunoprecipitation sequencing (MeDIP-seq) identified over 5500 differentially methylated regions (DMRs), predominantly in exons and 5’UTRs of annotated genes, of which 60 were linked to cardiovascular disease . Differential methylation in selected genes, including HOXA genes and ATF4 , was further confirmed by methylation-specific PCR. In mice subjected to PCL, an increase in DNMT1 was observed under disturbed flow compared to the artery used as an undisturbed flow control . Furthermore, the application of PCL in ApoE −/− mice led to the discovery that regions where flow was disturbed were significantly more susceptible to the development of atherosclerosis. Strikingly, treatment with the DNMT inhibitor 5-aza-2′-deoxycytidine (5-Aza) reversed these effects. Using reduced representation bisulfite sequencing, the authors demonstrated genome-wide DMRs within the endothelial-enriched genomic DNA in the LCAs compared to the RCAs, highlighting the importance of DNA methylation in endothelial dysfunction . Furthermore, in combination with microarray transcriptomic analyses, 11 “mechanosensitive” genes were identified in this study whose transcriptional expression was regulated by DNA methylation.
Many of the 11 genes identified are known to be important regulators of endothelial cell function and homeostasis and/or are functionally associated with atherosclerosis. Several more flow-responsive genes have also been identified in subsequent studies. These are summarized in . In vitro data collected using human umbilical vein ECs (HUVECs) are consistent with these in vivo results. HUVECs subjected to disturbed flow also presented higher levels of DNMT1, which were associated with enhanced monocyte adhesion . The disturbed flow-induced upregulation of DNMT1 has been linked to increased activity of mTOR, thus providing a link between mechanotransduction and the regulation of epigenetic-modifying enzymes. Oscillatory shear stress was found to activate the PI3K-mTOR-p70S6K through integrin β3, resulting in the upregulation of DNMT1. Importantly, disturbed shear stress coupled with the pharmacological inhibition of this signalling pathway resulted in decreased expression levels of DNMT1 and an improvement in endothelial function . Upon exposure of human aortic ECs to disturbed shear stress, DNMT3A was also found to be upregulated at the protein level, resulting in the hypermethylation of the KLF4 gene promoter and leading to the repression of KLF4 target genes, such as NOS3 . Consistent with this, increased methylation levels on the KLF4 promoter in ECs from swine aortic arch were also observed . The hypermethylated state and associated endothelial dysfunction observed in ECs, upon exposure to disturbed flow, may also be attributed to the downregulation of TET enzymes, particularly TET2 . Thus, HUVECs exposed to low shear stress in vitro were characterized by lower expression levels of TET2 compared to HUVECs exposed to physiological shear stress. The downregulation of TET2 was accompanied with a decrease in autophagic flux, which was demonstrated by an increase in p62, decreased levels of LC3 lipidation and decreased protein levels of eNOS . Importantly, the overexpression of TET2 before low shear stress exposure recovered eNOS and ameliorated autophagy. Low shear stress-mediated downregulation of TET2 has also been associated with enhanced endothelial–mesenchymal transition (EndMT) . In HUVECs, the decrease in TET2 was associated with a reduced expression of endothelial marker VE-cadherin and an increase in mesenchymal marker vimentin. The knockdown of TET2 revealed an increase in the protein levels of vimentin, alpha smooth muscle actin (α-SMA) and fibroblast-specific protein 1 (FSP-1), which were associated with a rearrangement of the cytoskeleton and increased cell migration. Despite the importance of these studies on elucidating the role of low shear stress on the regulation of TET2, it is unclear whether the reported effects are dependent on the impairment of hydroxymethylation, leading to DNA hypermethylation. However, decreased levels of TET2 and 5hmC were shown in aortic sinuses and atherosclerotic lesions of ApoE −/− mice, while 5mC increased, when compared to normal vascular tissue . Importantly, the overexpression of TET2 in ApoE −/− mice subjected to PCL inhibited the development of atherosclerotic lesions, even preventing the accumulation of lipids . TET2 overexpression was also shown to decrease the expression of pro-inflammatory molecules involved in endothelial dysfunction, such as ICAM-1, VCAM-1 and MCP-1 . 
Together, this shows that TET2 potentially protects against atherosclerosis, but its catalytic role in the regulation of DNA methylation in the setting of disturbed flow remains to be confirmed. 4.2. Histone PTMs Several studies have highlighted the importance of histone modifications as regulators of endothelial-specific gene expression. For instance, HAT7 has been implicated in the acetylation of histones H3 and H4, activating the expression of vascular endothelial growth factor receptor and maintaining EC identity . Specifically in ECs, the endothelial-expressed NOS3 promoter exhibits the enrichment of active histone marks, such as H3K9ac, H4K12ac, H3K4me2 and H3K4me3 . Strikingly, in HUVECs, hypoxia leads to the loss of these active histone marks from the NOS3 proximal promoter, highlighting their dynamic regulation . Histone PTMs have also emerged as important regulators of EC-specific gene expression in the setting of altered blood flow. In a seminal study, Illi and colleagues found that HUVECs exposed to shear stress led to the increased acetylation and phosphorylation of histone H3 (H3K14ac and H3S10ph, respectively) . Furthermore, shear stress coupled with treatment with HDAC inhibitor trichostatin A also led to the acetylation of histone H4 . Increased activity of HAT p300 was also observed in HUVECs exposed to laminar shear stress . This is associated with increased acetylation levels of histones H3 and H4 at the shear stress response element (SSRE) site within the NOS3 promoter, resulting in increased chromatin accessibility and transcriptional activity. Importantly, the inhibition of p300 by curcumin prevented the shear stress-induced transcription of NOS3 . In vivo, an increase in H3K9ac was observed in mouse descending aorta, where stable flow predominates . Several reports have suggested that shear stress also regulates the expression and/or activity of HDACs. Immunohistochemical staining of ApoE −/− mouse ECs isolated from arterial branches (where disturbed flow is prevalent) revealed an increase in HDAC3, which was confirmed in HUVECs exposed to low shear stress . Moreover, further in vitro studies verified that disturbed shear stress also leads to the upregulation and nuclear accumulation of HDAC1, 2, 3, 5 and 7 . Importantly, HDAC1 deacetylates the NOS3 promoter, repressing its transcription . Disturbed flow was also shown to promote the association of HDAC1, 2 and 3 to NRF2 protein, leading to its deacetylation and, consequently, the decreased expression of antioxidant genes . SIRTs are also suggested to be regulated by flow. Consistent with this, the expression of SIRT1 was found to be elevated in the thoracic region of the mouse aorta compared to the aortic arch . This was further confirmed in HUVECs exposed to laminar and oscillatory shear stress. In this study, the increase in SIRT1 was shown to be involved in the deacetylation of eNOS protein, enhancing its activity . It is important to point out that some of these studies report the deacetylation of non-histone proteins; however, it is likely that changes in such epigenetic-modifying enzymes would also impact chromatin remodelling. Histone methylation has also been demonstrated to be sensitive to laminar and oscillatory shear stress, with studies suggesting decreased and increased levels of H3K27me3 and H3K9me3, respectively, in HUVECs exposed to low shear stress . 
Moreover, using a macro-channels model system, the exposure of HUVECs to disturbed flow caused a generalized increase in H3K4me3 and a decrease in H3K27me3 . Treatment with TNF-α yielded similar results, suggesting that the gain and loss of H3K4me3 and H3K27me3, respectively, is linked to a dysfunctional phenotype in ECs . Consistent with this, tissue isolated from the aortic arch of ApoE −/− mice also showed a decrease in H3K27me3. Perhaps surprisingly, however, this was accompanied by an upregulation of EZH2, which is responsible for the deposition of this histone mark . Recently, histone phosphorylation was shown to regulate disturbed flow-induced endothelial inflammation. Mechanistically, disturbed flow activates the integrin α5β1-PKN1 axis, inducing the phosphorylation of activator protein-1 (AP-1) transcription factor subunit JUN and its translocation into the nucleus, where it associates with FOS to promote the expression of pro-inflammatory genes . PKN1 also translocates into the nucleus, where it phosphorylates histone H3.3 at serine 31 (H3.3S31ph), activating the expression of FOS / FOSB and thus enhancing endothelial inflammation . Interestingly, histone phosphorylation has been shown to induce the activity of other histone-modifying enzymes and promote other histone PTMs, such as acetylation and methylation, suggesting that epigenetic alterations are not “compartmentalized” but rather highly collaborative .
As discussed above, disturbed flow can act to reprogram the epigenome of ECs, in part via the dysregulation of the expression and/or localisation of epigenetic-modifying enzymes. Importantly, however, the rates of the reactions catalyzed by these enzymes are also dependent on the availability of metabolites that act as substrates, cofactors and donors, therefore linking epigenetics to metabolic alterations . It has become well established that metabolism and metabolic fluxes play an important role in endothelial dysfunction (reviewed in ). Most studies focused on EC metabolism have been limited to cultured ECs under static conditions, which do not consider the shear stress that ECs are constantly exposed to in vivo. Nonetheless, a few studies have reported alterations in metabolism and mitochondrial function upon different fluid shear stress conditions . This section summarizes what is currently known about EC metabolism in both homeostatic (exposed to laminar flow) and disturbed flow-induced dysfunctional states and hypothesizes how metabolic changes may impact epigenetic alterations linked to EC (dys)function. 5.1. Shear Stress-Mediated EC Metabolism Flow-mediated EC metabolism appears to differ significantly from that reported under static conditions (reviewed in ). HUVECs exposed to laminar shear stress are characterized by increased mitochondrial metabolism and remodelling of the mitochondrial network to comprise mostly more elongated, tubular mitochondria that arise from fusion events (i.e., when two mitochondria merge) . Mitophagy was also reported to be more active in laminar shear stress-exposed HUVECs . Indeed, the removal of damaged mitochondria by mitophagy creates an environment that favours mitochondrial metabolism .
The induction of mitophagy has been shown to be necessary for the endothelial differentiation of iPSCs, where PINK1-mediated mitophagy was shown to precede a boost in mitochondrial biogenesis, with increased expression of peroxisome proliferator gamma coactivator-1α (PGC-1α) . Perhaps significantly, PGC-1α has recently emerged as a flow-responsive gene. Thus, compared with oscillatory shear stress, human aortic ECs exposed to undisturbed flow exhibited an increased expression of PGC-1α , consistent with an increase in mitochondrial biogenesis and function. Complex and mature mitochondrial networks are associated with cells that are more dependent on oxidative phosphorylation (OXPHOS). FAO, which has been shown to be important for EC identity ( ) , generates reducing equivalents that enter OXPHOS , thus supporting mitochondrial metabolism. Consistently, the upregulation of KLF2 by laminar shear stress has been shown to decrease glucose uptake, thus limiting glycolysis . These studies possibly suggest that, unlike what has been observed in cultured ECs in static conditions (reviewed in [ , , ]), mitochondrial metabolism might play an important role in EC bioenergetics and homeostasis. By contrast, enhanced glycolysis has been reported to take place in ECs at regions of disturbed flow ( ) ; however, it is still unclear as to whether this is an adaptive, atheroprotective response or increases the susceptibility to atherosclerosis . Low shear stress leads to the activation of mechanotransducers YAP/TAZ, which promote glycolysis and a pro-inflammatory phenotype . In an independent study, ECs isolated from atheroprone regions of porcine aorta exhibited higher levels of HIF-1α, leading to the activation of several glycolytic genes, including 6-phosphofructo-2-kinase/fructose-2,6-biphosphatase 3 ( PFKFB3 ), hexokinase 2 ( HK2 ), enolase 2 ( ENO2 ) and glucose transporters 1 and 3 (GLUT1/3) . Mechanistically, disturbed flow led to an increase in NADPH oxidase 4-generated ROS, which were shown to be responsible for the increased stabilization of HIF-1α . In human aortic ECs exposed to disturbed flow, an increase in HIF-1α was also observed, which was accompanied with enhanced glycolysis and a decrease in mitochondrial respiratory capacity ( ) . Strikingly, lactate dehydrogenase A ( LDHA ) was found to be upregulated upon disturbed flow, possibly limiting the availability of pyruvate to enter the TCA cycle and reducing OXPHOS . Consistently, human aortic ECs exposed to disturbed flow present defective mitophagy, a very fragmented mitochondrial network and an increase in mitochondrial ROS compared to ECs exposed to stable flow . Fragmented mitochondrial networks, composed of round and unfused mitochondria, are usually present in cells that mainly rely on glycolysis for ATP production, which is consistent with the HIF-1α-driven increase in aerobic glycolysis observed in ECs exposed to low shear stress. Disturbed flow also leads to an increase in enolase1 ( ENO1 ), further potentiating the glycolytic pathway ( ) . The phenotype caused by disturbed flow was shown to be pro-inflammatory, with an increased engagement in TGF-β signalling and EndMT . Accordingly, ECs treated with TGF-β, a major activator of EndMT, were characterized by a decrease in FAO . 
Recently, EPAS1 (endothelial PAS domain-containing protein 1, also known as HIF2A ) was shown to protect ECs in regions of disturbed flow by upregulating fatty acid transporter cluster of differentiation 36 ( CD36 ) and endothelial lipase ( LIPG ), promoting fatty acid uptake and oxidation, and thus maintaining EC proliferation. The authors further demonstrated that obesity causes a downregulation of EPAS1, leading to increased susceptibility to atherosclerotic lesions at sites of disturbed flow . These studies further suggest that FAO possibly plays an important role in EC identity and homeostasis . 5.2. Linking the EC Epigenome and Metabolism The epigenetic alterations that take place upon disturbed flow-induced endothelial dysfunction might suggest that metabolic reprogramming occurs, perhaps in part, to regulate the availability of relevant metabolites to fuel epigenome-modifying reactions. However, this connection between metabolic regulation and epigenetic alterations has not been fully investigated in the setting of endothelial (dys)function thus far. As stated above, studies have shown that ECs exposed to different types of shear stress (laminar/undisturbed vs. oscillatory/disturbed) are characterized by the preferential use of specific metabolic pathways, and ECs also exhibit distinct, shear stress-dependent mitochondrial networks. Many of the epigenetic-modifying metabolites are produced in the mitochondria, so changes in mitochondrial morphology and function will directly impact the availability of these metabolites [ , , ]. 5.2.1. Acetyl-CoA and Histone Acetylation FAO is a major source of acetyl-CoA that can be used in histone acetylation reactions and has been shown to be important for EC identity, possibly by maintaining the intracellular pool of FAO-derived acetyl-CoA . Moreover, as stated above, loss of EC identity upon the induction of EndMT was reported to be accompanied with a decrease in FAO, which resulted in lower levels of acetyl-CoA . FAO, however, is not the only source of acetyl-CoA for histone acetylation, and fluctuations in the availability of differentially sourced acetyl-CoA are crucial for the regulation of histone acetylation and transcriptional regulation. For example, in iPSCs, the deprivation of exogenous lipids has been shown to cause an exacerbation of de novo lipogenesis via citrate, thus increasing the intracellular levels of lipogenic acetyl-CoA . Mechanistically, it was shown that citrate diverges from the TCA cycle to the cytoplasm, where it is converted into acetyl-CoA by ACLY ( ). Significantly, in the same study, enhancing FAO did not result in the same effect on the levels of acetyl-CoA and histone acetylation . Also, in this setting, an increase in histone acetylation marks was associated with increased HAT7 and decreased SIRT1 and HDAC1 expression levels , suggesting that altered expression of epigenetic-modifying enzymes and metabolic reprogramming may be linked. Interestingly, oscillatory shear stress was shown to upregulate fatty acid synthase ( FASN ), possibly suggesting dysregulated de novo lipogenesis, but further studies are needed. In proliferating muscle stem cells, regeneration is promoted when glucose is not completely oxidized, also causing citrate to be diverged to the cytoplasm and be converted into acetyl-CoA . Interestingly, upon the induction of EndMT with TGF-β, glucose-derived acetate was shown to increase the intracellular pool of acetyl-CoA, in an ACSS2-dependent manner ( ), in human umbilical artery ECs . 
This further emphasizes that acetyl-CoA derived from different sources may be used for the acetylation of histones in distinct genomic regions. 5.2.2. SAM and DNA/Histone Methylation Animal-derived proteins are known to have atherogenic effects, mainly due to their high methionine content. Indeed, excessive methionine intake contributes to endothelial dysfunction and atherosclerosis . Because methionine is the main precursor to SAM, which donates its methyl group for DNA and histone methylation reactions ( ) , one could argue that the increased utilization of methionine and/or the replenishment of carbon units could fuel the DNA hypermethylation observed in ECs when exposed to disturbed flow. Increased methionine uptake or the synthesis of amino acids that are involved in SAM metabolism have been observed in several types of cancer to alter the intracellular pool of SAM and methylation levels of DNA and histones . Changing the intracellular pool of SAM can impact methylation patterns and thus influence cell fate decisions . Perhaps significantly, one-carbon metabolism has been reported to be downregulated upon the exposure of human ECs to both laminar and oscillatory shear stress compared to static conditions . Furthermore, methionine was shown to be increased in blood plasma samples obtained from ApoE −/− mice subjected to PCL, possibly hinting at altered methionine uptake by ECs ( ). Thus, the effects of flow upon methionine uptake, one-carbon metabolism and methylation need to be further explored. 5.2.3. α-KG and DNA/Histone Demethylation α-KG is essential for the demethylation of DNA and histones, and fluctuations in this metabolite have been shown to influence demethylation levels and, consequently, cell fate . For instance, mouse ESCs have been shown to metabolize both glucose and glutamine to preserve high levels of α-KG and promote histone and DNA demethylation to determine cell fate . Recently, abrogating FAO in mouse cardiomyocytes through the depletion of Cpt1b (the gene that encodes the muscle-specific isoform of FAO rate-limiting enzyme carnitine palmitoyltransferase 1B) demonstrated an increase in the levels of α-KG associated with a reduction in the levels of H3K4me3 in cardiomyocyte-specific genes and enhanced cardiac regeneration . The fragmented mitochondrial network that characterizes ECs exposed to oscillatory shear stress might suggest a dysregulation of the TCA cycle and altered mitochondrial metabolism , possibly impacting the production of α-KG by IDH. However, the pro-inflammatory YAP/TAZ pathway, activated upon endothelial dysfunction, also enhances glutaminolysis, which may also impact the levels of α-KG . 5.2.4. Lactate and Histone Lactylation Disturbed flow-exposed ECs have been shown to rely on aerobic glycolysis. Despite glycolysis also being a potential source of acetyl-CoA , the aerobic glycolysis observed upon disturbed shear stress might rather favour the generation of lactate. In dysfunctional ECs, LDHA is upregulated, promoting the conversion of pyruvate into lactate ( ) . Another important histone PTM, although less explored in EC biology, is lactylation. The roles of histone lactylation have been reviewed elsewhere , and increased levels of this PTM have been linked to an increase in glycolysis . Recently, lactate derived from aerobic glycolysis in ECs was shown to induce histone lactylation on H3K18 (H3K18la) on the SNAI1 promoter, activating its expression and promoting EndMT . 
The inhibition of glycolysis decreased H3K18la levels on the SNAI1 promoter and attenuated EndMT and atherosclerosis .
The observation that atherosclerosis develops preferentially at sites of disturbed flow highlights the critical role of mechanosensory pathways in the endothelium in the etiology of the disease. Understanding the molecular mechanisms which underlie the reprogramming and remodelling of the vasculature in response to mechanosensory cues in ECs is crucial to inform the development of better therapeutic or preventive strategies to target cardiovascular diseases. Current research methods, particularly those involving dynamic changes in epigenetic marks, have limitations. However, the continual development of more sensitive and specific assay systems may further promote future progress. Epigenetic changes within ECs are increasingly demonstrated to be causally related to disease development and progression.
Here, we explore how metabolic changes, induced by mechanosensory pathways, might act to change the epigenetic landscape through alterations in the availability of metabolites. The levels of acetyl-CoA, α-KG and SAM are all known to regulate chromatin remodelling. EC identity seems to be compromised when FAO is downregulated. However, to the best of our knowledge, whether the decrease in FAO is causal in the loss of histone acetylation at EC-relevant loci is unknown, thus warranting future studies. Acetyl-CoA derived from different metabolic pathways does not seem to exert the same effect on histone acetylation and cell identity. For instance, FAO- and lipogenic-derived acetyl-CoA do not have the same impact on maintaining histone acetylation in human iPSCs. Interestingly, the increase in FASN in oscillatory shear stress-exposed ECs may suggest an increase in de novo lipogenesis, but this also needs to be confirmed. Furthermore, the hypermethylation of EC-relevant gene promoters was found to take place upon exposure to disturbed flow, suggesting that DNA methylation reactions may be fueled by an increased uptake of methionine and subsequent conversion to SAM, but further studies are needed to assess this. Glucose-derived lactate has been demonstrated to fuel histone lactylation upon endothelial dysfunction, thus prompting the loss of EC identity. Nonetheless, flow-dependent increases in lactate and histone lactylation remain largely unexplored, and the role of this modification in dysfunctional ECs needs to be further investigated. It should also be noted that the roles of altered metabolite levels in specifically modulating the epigenome cannot be assumed and remain to be demonstrated definitively in many cases. The direct impact of metabolism on the EC epigenome remains largely uninvestigated. However, given their interconnection, we believe it is reasonable to expect that shear stress-induced metabolic shifts impact the EC epigenome, paving the way to target metabolism as a therapeutic strategy. Lessons from cancer metabolism could be of use when studying the relationship between metabolism and epigenetics in ECs. The altered metabolism observed in cancer has increasingly been targeted by pharmacological inhibition. Indeed, targeting glycolysis, FAO and OXPHOS has successfully been shown to compromise cancer cell metabolism and hamper disease aggressiveness and progression (reviewed in ). Significantly, targeting glycolysis has been shown to be a viable strategy to mitigate the effects of atherosclerosis. Indeed, the inhibition of HIF-1α and PFKFB3 has been shown to reduce atherosclerosis in a mouse model . Also, increasing FAO by overexpressing PPAR-γ has been shown to mitigate FA-induced endothelial dysfunction observed in obesity . ACLY has also become a therapeutic target to limit FA and cholesterol biosynthesis, both of which require acetyl-CoA in the cytoplasm . Although disturbed flow by itself causes a significant degree of metabolic reprogramming and shifts ECs towards a dysfunctional state, the effect of other risk factors on EC metabolism, such as hyperglycaemia, hypercholesterolaemia and obesity, should also be considered. Furthermore, despite the importance of ECs in maintaining vascular integrity, the contribution of other cell types in the vasculature (VSMCs and immune cells) should not be ignored when addressing the pathogenesis of atherosclerosis .
Indeed, mutations in DNMTs and TETs that occur during the clonal haematopoiesis of indeterminate potential (CHIP) have been reported to potentiate inflammation in macrophages, increasing the risk of atherosclerosis. Altogether, mechanotransduction is regulated by shear stress and is involved in the regulation of epigenetic-modifying enzymes, and we believe that metabolic alterations might fuel the epigenetic reprogramming that takes place upon exposure to disturbed flow, which is associated with endothelial dysfunction. However, intercellular communications need to be considered to provide a comprehensive overview of the development of endothelial dysfunction and atherosclerosis.
The influence of different factors on the bond strength of lithium disilicate-reinforced glass–ceramics to Resin: a machine learning analysis
Glass-based ceramics are the earliest ceramics used for fabricating dental restorations. Among the various all-ceramic materials, glass-based ceramics have a modulus of elasticity closest to that of natural teeth and offer superior aesthetic performance and acceptable mechanical properties after bonding . Glass-based ceramics have been widely used in dental clinics, especially for the fabrication of veneers or inlays. The flexural strength of glass-based ceramics typically ranges from 120 to 480 MPa, depending on their chemical composition and manufacturing technique; however, this is not strong enough for the material to directly withstand the bite force . Fortunately, the long-term performance of glass-based ceramic restorations is not inferior to that of metal oxide polycrystals like zirconia despite the higher mechanical strength of the polycrystals . This may be because glass-based ceramics can integrate with enamel and dentin through sufficiently firm and stable bonding and thereby offer strong resistance to the bite force . Therefore, bond performance is always a concern for glass-based ceramics. Several in vitro and in vivo experiments have been conducted over several decades to develop bonding strategies and corresponding bonding products for glass-based ceramics. However, the need for higher and more stable bonding outcomes, safer use, and simplified procedures persists, and novel candidates and methods are continually being developed. The in vitro bond strength test is the most commonly used method for evaluating the bonding performance of restoration materials. Traditional methods used to determine the bond strength of glass-based ceramics include shear bond strength, tensile bond strength, microshear bond strength, and microtensile bond strength tests, which assess the adhesive strength by applying shear or tensile forces to the bonded interface . Unfortunately, although these tests have been relatively standardized by recommending testing methods, instruments, and certain parameters such as the specimen size, loading speed, and shape of the bonding interface (ISO 29022:2013, ISO/TS 4640:2023, ISO/TS 16506:2017), such efforts are insufficient to eliminate the influence of uncontrollable factors. Variations in the brands of experimental equipment, testing environments, and operators could lead to unpredictable influences on the results . The values measured or conclusions drawn from the experiments may deviate significantly from, or contradict, the actual values or conclusions . Because of complicated sample preparation methods, the bond strength test also requires considerable time and work . Moreover, the inherent limitations of low precision, low efficiency, low sensitivity, and uncontrollability associated with this traditional method cannot be avoided . Thus, it is difficult to directly compare bonding results between studies, even if the same product or similar strategies were investigated and clear conclusions were drawn in each study. Although the validity of each individual study may be affirmed, discrepancies in results among similar studies cannot be analyzed because they might be due to minor changes in the operators, operation process, or software and hardware conditions during the test . Consequently, determining which conclusion is closest to reality presents a challenge.
Moreover, even though in vitro bond strength tests can be affected by several factors, such as the type of ceramic, surface treatment, and the composition of the primer, adhesive, and resin cement, it is not feasible to identify the most significant contributing factor to the bond strength of glass-based ceramics. Although traditional methods are widely used, they are associated with limitations such as complex sample preparation and the inability to account for all factors influencing the bond strength; this led us to opt for machine learning (ML) techniques in the current study. ML employs computational algorithms to identify patterns within high-dimensional datasets, enabling it to learn from empirical data and predict unseen outputs . Depending on the specific problem and data, ML can analyze the interactions between features at varying levels of complexity, offering a deeper understanding of the relationship and impact on the predicted outcome . While rooted in traditional data analysis, ML offers a distinct advantage over conventional experiments that require controlled variables. It excels in handling complex factors and exploring their interdependencies, leading to more accurate and holistic insights. Additionally, its predictive capabilities distinguish it from conventional bonding experiments. ML has been used to analyze the effect of dental adhesive components on the microtensile bond strength of the dentin and presented good accuracy and applicability in predicting results for small sample data sets . Most existing studies are limited to a narrow set of features and have not specifically focused on glass ceramics. In contrast, the current study examines a broader range of variables and explores how multiple interacting factors contribute to bond strength prediction. By exploring the synergistic effects of feature combinations and conducting feature importance analysis, this study aimed to analyze the effects of the ceramic processing technique, ceramic surface roughening, components of primer, adhesive, and resin cement on the bond strength of a typical glass-based ceramic, lithium disilicate-reinforced glass–ceramic using ML. The objective was to rank the significance and contributions of these reportedly known factors to the bonding of glass-based ceramics. Based on the new data set, we also sought to build ML models to predict the potential bond strength categories from the corresponding factors, which may aid in evaluating future bond products or strategies for glass-based ceramic. Data collection We conducted a comprehensive review of in vitro studies to evaluate the bonding performance of lithium disilicate-reinforced glass ceramics to resin cement. Thirty-six relevant studies were included based on predefined inclusion and exclusion criteria . Laboratory studies were included based on the following criteria: studies published in English; the bonding substrate was lithium disilicate-reinforced glass–ceramic and resin cement; light curing was used for polymerization; the manufacturer's instructions were followed during the bonding test process; the bonded specimens were stored in water for no more than 7 days before the measurement; bond strength values from different bond strength tests, including either (micro) shear bond strength or (micro) tensile bond strength test were collected. 
The following bond strength data were excluded from the current study: obtained from the dentin of human or other animal teeth or resin composites; obtained from studies involving other types of glass–ceramics; lacking initial values; obtained from long-term or aging bonding performance studies; and missing complete material information. A total of 36 studies were included. The manufacturing processes of lithium disilicate-reinforced glass–ceramics, surface roughening treatments, bonding procedures, compositions of primers/adhesives/resin cement (traced from the safety data sheets provided by the manufacturers), and the corresponding initial bond strength values were recorded in Supporting Information. Input feature selection Eight relevant and differentiating categories related to bonding strength were selected from the abovementioned initial data set based on the following factors:
- Manufacturing processes: milling (feature '1') or heat pressing (feature '0')
- Surface roughening: 'concentrations of Hydrofluoric (HF) acid' and 'etching time'
- Application of silane gamma-methacryloxypropyltrimethoxysilane (MPTS): to the primer (gamma-MPTSp), the adhesive (gamma-MPTSa), or both (gamma-MPTS)
- Application of 10-methacryloxydecyl dihydrogen phosphate (MDP): to the primer ('MDPp'), the adhesive ('MDPa'), the resin cement ('MDPc'), or combined application (two or three; 'MDP')
- Adhesive solvent: with (1) or without (0) organic solvent
- Adhesive resin matrix: bisphenol A-glycidyl methacrylate (Bis-GMA; BisGMAa), urethane dimethacrylate (UDMA; UDMAa), triethylene glycol dimethacrylate (TEGDMA; TEGDMAa), or Vitrebond copolymer (VCP)
- Resin cement resin matrix: Bis-GMA (BisGMAc), UDMA (UDMAc), or TEGDMA (TEGDMAc)
- Fillers in adhesive: with (1) or without (0) fillers
The data set comprising 19 input features from the eight categories was subjected to feature importance analysis. The input features were streamlined before accuracy prediction to reduce the risk of model overfitting. ML model selection Given the relatively small range (0.18–47.46 MPa) and significant variation in bond strength values in the collected studies, a binary classification approach in supervised learning was utilized in the ML analysis. The obtained values were averaged to minimize the impact of differences between experimental values and classified into two categories based on the mean bond strength: low bond strength (< 23.75 MPa, labeled as 0) and high bond strength (≥ 23.75 MPa, labeled as 1). The study specifically utilized classification algorithms to predict bond strength classes. The classification models were developed on a Lenovo Legion desktop (Intel(R) Core (TM) i5-8300H CPU @ 2.30GHz, 4 cores, 8 gigabytes of RAM). Nine ML algorithms from Python's scikit-learn library were applied and grouped according to their categories. The linear model used was logistic regression (LR). The instance-based learning algorithm included k-nearest neighbors (KNN), while the kernel-based approach was represented by a support vector machine (SVM). For tree-based and ensemble methods, decision tree (DT), extra trees (ET), random forest (RF), gradient boosting (GB), and extreme GB (XGB) were utilized. Lastly, the neural network model employed was the multilayer perceptron (MLP). This classification ensured a comprehensive evaluation of the bond strength prediction across different learning approaches. The constructed data set was then imported for analysis.
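To make the encoding and model setup concrete, a minimal Python sketch is given below. It is not the authors' code: the column names, toy records, and the data-frame layout are illustrative stand-ins for the coded features and the 23.75 MPa labeling rule described above, and XGBClassifier comes from the separate xgboost package rather than scikit-learn itself.

import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import ExtraTreesClassifier, RandomForestClassifier, GradientBoostingClassifier
from sklearn.neural_network import MLPClassifier
from xgboost import XGBClassifier  # XGB is provided by the separate xgboost package

# Hypothetical, made-up rows standing in for the coded literature data:
# a milling flag, HF concentration (%), etching time (s), MDP in the primer,
# and the measured mean bond strength (MPa).
df = pd.DataFrame({
    "milling":     [1, 0, 1],
    "HF_conc":     [5.0, 9.5, 4.9],
    "etch_time_s": [20, 60, 20],
    "MDPp":        [1, 0, 0],
    "bond_MPa":    [31.2, 18.7, 25.4],
})

# Binary label: high (>= 23.75 MPa, the mean of the collected values) vs. low.
y = (df["bond_MPa"] >= 23.75).astype(int)
X = df.drop(columns="bond_MPa")

# The nine classifiers named in the text, with default settings except where
# a setting is needed for convergence or for probability-based ROC/AUC scoring.
models = {
    "LR":  LogisticRegression(max_iter=1000),
    "KNN": KNeighborsClassifier(),
    "SVM": SVC(probability=True),
    "DT":  DecisionTreeClassifier(random_state=0),
    "ET":  ExtraTreesClassifier(random_state=0),
    "RF":  RandomForestClassifier(random_state=0),
    "GB":  GradientBoostingClassifier(random_state=0),
    "XGB": XGBClassifier(eval_metric="logloss", random_state=0),
    "MLP": MLPClassifier(max_iter=2000, random_state=0),
}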
Model training and evaluation Stratified cross-validation (CV), nested CV, and the function grid search CV were employed to identify the optimal hyperparameter combinations for each model and assess their performance. The models were evaluated using the area under the receiver operating characteristic (ROC) curve (AUC) and accuracy metrics. The ROC curve highlights the balance between true and false positive rates over different threshold values. The AUC quantifies the overall efficacy of a binary classifier, measuring the area under the ROC curve, with scores ranging from 0 to 1. Accuracy is determined as the proportion of correct predictions relative to the total predictions made. Feature importance analysis Six ML models (LR, ET, RF, GB, XGB, and DT) were selected for feature importance analysis because the other three models do not support this function. Each model provides a feature importance score for every input feature, denoted as $I_{i,j}$, where $i = 1, 2, \ldots, 6$ represents the model index and $j = 1, 2, \ldots, 19$ corresponds to the feature index. Given the differences in the scales of feature importance across models, the feature importance scores for each model were min–max normalized to ensure comparability. Normalization was performed using the min–max normalization approach, which adjusts the scores to a relative scale: $I_{i,j}^{\mathrm{norm}} = \frac{I_{i,j} - I_{\min,i}}{I_{\max,i} - I_{\min,i}}$, where $I_{i,j}^{\mathrm{norm}}$ is the normalized importance of feature $X_j$ for model $M_i$, and $I_{\min,i}$ and $I_{\max,i}$ are the minimum and maximum importance scores for model $M_i$, respectively . After normalization, the average importance score for each feature was calculated by taking the mean of the normalized importance scores across the six models: $I_{j}^{\mathrm{avg}} = \frac{1}{6}\sum_{i=1}^{6} I_{i,j}^{\mathrm{norm}}$, where $I_{j}^{\mathrm{avg}}$ represents the average importance score of feature $X_j$. The averaged importance scores were then ranked to determine the relative significance of each feature in predicting the bond strength class. Features with higher average scores were considered more influential in the model predictions. The data set was reconstructed using the top five input features to create a streamlined feature set. A correlation analysis of variables in the new data set generated a matrix heatmap. The prediction accuracy was then evaluated using the nine ML models to determine which model accurately predicts the bond strength class of glass-based ceramics. The aforementioned steps are summarized in Fig. , which provides a clear overview of the methodology used in this study.
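As a concrete illustration of the normalization and averaging steps above, the short Python sketch below extracts per-model importance scores, min–max normalizes them, and averages them across the six models; using the absolute logistic regression coefficients as that model's importance scores is an assumption, since the text does not state how LR importances were derived.

```python
import numpy as np

def normalized_importance(model):
    """Min-max normalize one fitted model's feature importance scores to [0, 1]."""
    if hasattr(model, "feature_importances_"):        # DT, ET, RF, GB, XGB
        raw = np.asarray(model.feature_importances_, dtype=float)
    else:                                              # LR: assumed |coefficients| as importances
        raw = np.abs(np.ravel(model.coef_))
    span = raw.max() - raw.min()
    return (raw - raw.min()) / span if span else np.zeros_like(raw)   # I_ij^norm

def average_importance(fitted_models, feature_names):
    """Average the normalized importances over the models and rank the features."""
    scores = np.vstack([normalized_importance(m) for m in fitted_models])  # (n_models, n_features)
    avg = scores.mean(axis=0)                                              # I_j^avg
    return sorted(zip(feature_names, avg), key=lambda pair: pair[1], reverse=True)

# Usage (assuming `importance_models` holds the six fitted models and
# `feature_names` the 19 input feature labels):
# for name, score in average_importance(importance_models, feature_names)[:5]:
#     print(f"{name}: {score:.3f}")
```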
In total, 193 bond strength values were collected from the lithium disilicate-reinforced glass–ceramics (Fig. ). The feature importance was evaluated using six ML models (LR, DT, RF, ET, GB, and XGB; Fig. ); the average feature importance is shown in Fig. . Based on the results of the feature importance analysis, the ranking of features contributing to bond strength, in order of importance, is as follows: HF concentration, HF etching time, gamma-MPTS in the primer, MDP in the primer, Bis-GMA in the cement, gamma-MPTS shared by both the primer and adhesive, TEGDMA in the cement, MDP shared by the primer, adhesive, and cement, the processing method of lithium disilicate-reinforced glass–ceramics, UDMA in the cement, Bis-GMA in the adhesive, gamma-MPTS in the adhesive, a solvent in the adhesive, Vitrebond copolymer in the adhesive, filler in the adhesive, MDP in the cement, MDP in the adhesive, UDMA in the adhesive, and TEGDMA in the adhesive. A five-feature data set was constructed from the five top-ranked features. Table presents a portion of the data set consisting of five features, while Fig. illustrates the correlations among the variables in the data. The HF etching time and concentration are represented as continuous variables, whereas the other input features are binary, with 1 indicating "yes" and 0 indicating "no." Similarly, the output is also binary, where 0 corresponds to a low bond strength class, and 1 represents a high bond strength class. Table shows the optimal hyperparameters obtained by grid search CV with stratified tenfold CV (the second column, random state set to 42 for all runs), the ROC curves with AUC scores (the third column, random state set to 42 for all runs), the CV results on AUC and accuracy scores (the fourth column, random state set to 42 for all runs) and the AUC of the stratified CV and nested CV (the fifth column, using random states ranging from 0 to 29) for all models of the five-feature data set. Stratified CV produced AUC scores of 0.71–0.93 and accuracy scores of 0.64–0.83 for the nine models at a random state of 42, with GB achieving the highest scores. Based on the AUC of the stratified CV and nested CV, XGB achieved superior model performance and accuracy. This study utilized six ML models to perform a feature importance analysis on the bond strength data for lithium disilicate-reinforced glass–ceramic collected from previous studies. The feature importance scores from the six ML models were normalized and averaged to rank the average importance scores. Features ranked higher are considered to contribute more significantly to the bond strength. Based on the feature importance analysis results, HF acid concentration, HF acid etching time, gamma-MPTS in the primer, MDP in the primer, and Bis-GMA in the cement were identified as the top five factors influencing bond strength. The bonding between glass-based ceramic and resin relies on the combined contributions of micro-mechanical interlocking and chemical adhesion . The fundamental composition of glass-based ceramics comprises glass matrix and crystalline phase. The glass component accounts for 30% to 70% of the total mass and provides transparency and processibility to the glass-based ceramics. The crystalline phase comprises microcrystals precipitated from the glass matrix during heat treatment, making up 30% to 70% of the total mass, and imparts higher strength and hardness to the glass-based ceramics. 
The dissolution of the glass phase in glass-based ceramics by HF is a well-established surface roughening strategy . HF etching primarily achieves micro-mechanical interlocking. Horn et al. first demonstrated that HF effectively enhances the bond strength of glass-based ceramics to resin . Thus, HF etching has remained a necessary step in bonding glass-based ceramics and has not been replaced by other etching or roughening methods. The debate about the dominant role of micro-mechanical interlocking or chemical affinity in the bonding performance of glass-based ceramics is ongoing . The results of the current study clearly indicate that HF-related factors present a more significant advantage over other factors, suggesting that micro-mechanical interlocking has the most significant impact on the bond strength of glass-based ceramics. Silanization with gamma-MPTS is the most commonly used strategy to achieve chemical bonding between glass-based ceramics and resin . Although other silane coupling agents have been explored in previous studies, they have not surpassed gamma-MPTS . Tooth surface conditioning, ceramic surface conditioning, and cementing with resin cement are the three typical steps involved in bonding glass-based ceramic restorations. Products increasingly integrate multiple active components to eliminate the need for separate tooth or ceramic surface conditioning. Gamma-MPTS has become a common functional component of universal adhesives. However, whether the chemical coupling provided by the addition of gamma-MPTS to primers is comparable to that following its addition to adhesives is controversial; similarly, the effect of single or combined use of gamma-MPTS-containing primers and adhesives on the bonding performance is debatable . The argument for the combined use is based on the low amount of gamma-MPTS in the adhesive. In the current study, using gamma-MPTS in the primer alone had a more significant impact on the bond strength, ranking much higher than the combined use with a silane-containing adhesive or the use of a silane-containing adhesive alone. Thus, even though the amount of gamma-MPTS in the adhesive may be insufficient to achieve chemical coupling equivalent to that provided by gamma-MPTS in the primer, increasing the total silane content does not necessarily lead to a corresponding improvement in bond strength. Therefore, the redundant application of gamma-MPTS in the clinical setting is unnecessary. MDP is known for its versatile bonding capability to the tooth structure, metals, and metal oxide ceramics, making it a key functional monomer in universal adhesives and self-adhesive resin cements. The addition of MDP in primers and other adhesive products is typically intended to enhance bonding to metal oxide ceramics . Unexpectedly, the feature importance analysis in the current study found that adding MDP to the primer significantly contributes to the bond strength. A possible explanation is that acidic conditions facilitate the hydrolysis of gamma-MPTS ; the acidic environment created by MDP would therefore promote the hydrolysis of gamma-MPTS, leading to stronger siloxane (–Si–O–Si–) bonds on the surface of silica-based ceramics. Moreover, MDP in the primer exhibited a stronger impact on bond strength than MDP in the adhesive and/or resin cement, which might be due to the lower proportion of MDP relative to gamma-MPTS in the adhesive and resin cement. Nonetheless, direct evidence supporting a beneficial role of MDP in the bonding to glass-based ceramics is lacking.
The resin matrix of universal adhesives and self-adhesive resin cements typically includes Bis-GMA, or UDMA and TEGDMA, with Bis-GMA used most commonly. The findings of the current study indicate that Bis-GMA in resin cement significantly contributes to bond strength, whereas the resin matrixes in adhesives have a lower impact; thus, Bis-GMA-based resin cement should still be recommended for use. Other factors examined in this study, such as fillers and organic solvents in adhesives and cements, were found to have minimal contributions to bond strength. These findings suggest that functional monomers, rather than other components in adhesives or resin cements, play the most critical roles in influencing the bond strengths of glass-based ceramics. This study investigated two methods, milling and heat pressing, for processing lithium disilicate-reinforced glass–ceramics. Although these methods resulted in structural differences, they did not alter the chemical composition. Thus, the processing method did not influence the bond strength significantly. The results of the present study showed the advantages of analyzing the bond strength without interference from confounding factors. Due to experimental costs and workload, the in vitro bond strength test usually includes limited bonding materials and strategies; thus, the effectiveness cannot be guaranteed. For example, when assessing the bonding performance of a specific active component within a bonding product, it is difficult to evaluate whether the component is influenced by or interacts with other chemical components in the adhesive system, thereby affecting the bonding outcomes. Moreover, different bonding products have varying chemical compositions. Given the vast number of available adhesive products, it is not feasible to accurately screen the best adhesive product and bond strategy through traditional in vitro bond strength testing. Meta-analyses classify and synthesize previous studies to assess the magnitude and consistency of an effect and provide a quantitative evaluation. However, they cannot predict specific experimental data. Multiple studies involving in vitro experiments are conducted when a new product hits the market, often yielding conflicting results, and newer products may already have been introduced by the time a meta-analysis can be conducted, resulting in a seemingly endless repetitive process. Prediction is another essential function of ML. The use of ML to predict material properties involves three key components: a training data set, a set of features as model inputs, and an ML algorithm that maps these inputs to the desired target properties . These features must effectively capture the attributes related to the target property. The input features, processed through the trained models, yield a classification of bond strength. The size and complexity of the data set play a crucial role in determining ML model performance. In the case of glass-based ceramic bonding, data sets are generally smaller and more complex than those from other fields, leading to an increased risk of overfitting, which could result in poor generalization performance. Several strategies, including CV, ensemble learning, and feature engineering, were implemented in the current study to mitigate the risk of overfitting. 
Instead of reserving a separate test set, which would reduce the amount of training data available, we implemented various CV techniques that are widely accepted methods for evaluating ML model performance, including k-fold CV (commonly using 5 or 10 folds), stratified CV, grid search CV, and nested CV. Grid search CV along with stratified tenfold CV was employed to fine-tune the hyperparameters and assess model performance. Grid search CV, a method used to determine the optimal parameters of a model, relies on predefined parameter values. The results obtained from this method can be influenced by the chosen grid of values; therefore, we set the parameter values and ranges based on common recommendations from the relevant literature and domain expertise to minimize this influence. The specific grid values used in this experiment are provided in the supporting information file. The nested CV was introduced to avoid overly optimistic outcomes from using the same CV method for parameter tuning and evaluation, thus providing a more rigorous performance evaluation. A random state can impact the CV results; therefore, we varied the random state from 0 to 29, repeating the stratified CV and nested CV 30 times. This generated a series of scores visualized as box plots to better understand the models' generalization in a small dataset. In addition, ensemble learning was applied to enhance model robustness and reduce overfitting. Specifically, four tree-based ensemble models (ET, RF, GB, and XGB) were used for binary classification on the newly constructed dataset. The ensemble approach provided strong generalization performance, demonstrated during the evaluation process . Feature engineering, incorporating techniques such as feature selection, domain knowledge integration, and feature importance analysis, was applied to identify the most relevant features and remove redundant ones, thus reducing model complexity and minimizing the risk of overfitting . Based on the objectives covered in previous literature, 19 input features related to influencing factors were identified, which were then reduced to the top five features according to the feature importance analysis and the average importance scores that influence the bond strength of lithium disilicate-reinforced glass–ceramics. A new data set focusing on these five features was subsequently constructed, showing reasonable results and matching the corresponding knowledge about their influence on glass-based ceramic bonding. Adjustments to the evaluation methods, the choice of ensemble models, and data set dimensionality reduction contributed to the improved effectiveness of the ML models and ensured prediction accuracy . The XGB model is preferable when considering computational cost, training time, model complexity, data set size, and dimensionality. This model can be used to develop classification models that predict the bond strength categories of glass-based ceramics and resin matrixes, advancing the application of ML data-driven tools in dental bonding. For example, in the case of a new bonding strategy or a new product to be tested, the constructed XGB model can be used to predict the bond strength class as a reference for the in vitro bond strength test. By inputting the HF acid concentration and etching time, together with the presence or absence of gamma-MPTS in the primer, MDP in the primer, and Bis-GMA in the cement, the model outputs the predicted bond strength class.
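The following Python sketch illustrates that kind of usage with a hypothetical new bonding protocol; the training arrays, hyperparameters, and input values are placeholders and do not reproduce the study's tuned model or data set.

```python
import numpy as np
from xgboost import XGBClassifier

# Stand-in training data with the five retained features, in this order:
# [HF concentration (%), etching time (s), gamma-MPTS in primer, MDP in primer, Bis-GMA in cement]
rng = np.random.default_rng(42)
X_train = np.column_stack([
    rng.uniform(1, 10, 193),        # HF concentration
    rng.uniform(10, 120, 193),      # etching time
    rng.integers(0, 2, (193, 3)),   # three binary component features
])
y_train = rng.integers(0, 2, 193)   # stand-in low/high labels; real labels come from the data set

model = XGBClassifier(n_estimators=200, max_depth=3, learning_rate=0.1, eval_metric="logloss")
model.fit(X_train, y_train)

# Hypothetical new protocol: 5% HF for 20 s, silane-containing primer, no MDP in primer, Bis-GMA cement.
new_case = np.array([[5.0, 20.0, 1, 0, 1]])
print("Predicted class:", int(model.predict(new_case)[0]))          # 1 = high (>= 23.75 MPa), 0 = low
print("P(high bond strength): %.2f" % model.predict_proba(new_case)[0, 1])
```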
However, the current predictive results are based on the reported materials and bond strategies; therefore, the XGB model might not be applicable when new bond products or strategies involve factors other than the 19 features investigated in the current study. One of the limitations of this study is that only initial bond strength values were collected, without values obtained after artificial aging. The varying aging protocols used in different studies led to inconsistencies in the aging intensity, making analysis of the aged data impossible. Additionally, although feldspathic porcelain, leucite-reinforced glass–ceramics, and lithium disilicate-reinforced glass–ceramics are all familiar glass-based ceramics, there is limited information about the bonding performances of leucite-reinforced glass–ceramics and feldspathic porcelain. Hence, the available data were insufficient to achieve satisfactory outcomes in feature analysis or bond strength prediction for these materials. Therefore, the efficiency of the XGB model in predicting the bond strength range of other types of glass-based ceramics needs to be evaluated in the future. Research significance This study aimed to address the limitations of traditional in vitro bond strength testing in glass-based ceramics using ML models. This study lays the foundation for further expanding ML applications in the dental materials field. Future research could explore the use of ML to predict the bond strengths of other ceramic materials, such as zirconia, thereby expanding the scope of ML applications and developing a more universal bond strength prediction model. Additionally, future studies could explore additional factors or refine existing ones, such as replacing the general acid etching time with more specific etching parameters, while maintaining model performance. This would provide more detailed guidance for product development and clinical applications. Further advancements in this area could significantly enhance the precision and applicability of bond strength predictions in the field of dental materials. Based on the results of the analysis and within the limitations of this study, the following conclusions can be drawn:
- HF concentration and etching time, gamma-MPTS or 10-MDP in the primer, and Bis-GMA in the cement are the top five features contributing to the bond strength of lithium disilicate-reinforced glass ceramics.
- Micro-mechanical interlocking provided by HF acid etching has the most significant impact on the bonding of glass-based ceramics.
- Conditioning the ceramic surface with a gamma-MPTS-containing primer is important for chemical bonding, whereas additionally combining gamma-MPTS-containing adhesives or resin cements is unnecessary.
- MDP in the primer influences the bond strength of glass-based ceramics.
- Bis-GMA-based resin cements are recommended; the solvents and resin matrixes of adhesives, as well as the fillers of adhesives and cements, contributed less to the bonding of glass-based ceramics.
- ML with CV, feature engineering, and ensemble learning can predict the range of bond strength values of glass-based ceramics.
- XGB is a suitable model for predicting the bond strength range of glass-based ceramics, with an accuracy of 0.82 and an AUC score of 0.92, outperforming other models such as LR, ET, RF, and GB.
Supplementary Material 1.
Filtering out the noise: metagenomic classifiers optimize ancient DNA mapping
f668fce1-27a8-4bac-9a46-95d95407cb56
11646131
Forensic Medicine[mh]
The field of paleogenomics relies on degraded ancient DNA (aDNA) molecules extracted from historic or prehistoric biological remains to study past environments and populations. Major progress in molecular methods to isolate and sequence aDNA has enabled the recovery of high-quality ancient genomes from many different species and sources . However, contamination from modern and ancient exogenous sources remains a challenge that requires attention to improve the reliability and interpretative power of paleogenomic research. Sample exposure to contaminating sources of DNA happens at various stages, including microorganisms and environmental DNA in the soil matrix; DNA from people who collected and handled samples in the field and/or museums and performed laboratory work ; and cross-contamination from different samples in the lab during DNA extraction and contamination from DNA sequences in reagents and consumables . In short, a complex mixture of ancient and modern DNA contaminants continuously accumulates in and on the sample from the time of death of the organism up to laboratory work. As a result, endogenous DNA content is often outcompeted by exogenous DNA contaminants in sequenced data . In the last decade, continuing improvements in laboratory protocols and established best practice guidelines have specifically addressed the issue of contamination in aDNA sequence data. From the excavation to library preparation and target enrichment, stringent measures are applied to minimize, identify, or discard exogenous DNA contaminants during sample handling and laboratory work . Beyond these practical advances, sequencing data processing could assist with managing DNA contaminants. Currently, computational methods rely on the specificity of mapping shotgun sequences to a linear reference genome of, or closely related to, the species of interest . However, spurious mapping of exogenous sequences to the target reference increases with decreasing fragment length and if the exogenous sequences come from a species closely related to the target reference. Recently, Feuerborn et al. suggested the use of competitive mapping to remove human contamination from faunal aDNA datasets by mapping aDNA sequences against a composite reference sequence file containing both human and target reference genomes simultaneously. This technique is traditionally used in microbial genomics . However, competitive mapping only considers a few sources of contamination and is not easily scalable to target multiple complex eukaryotic organisms due to increasing computational demand with larger composite reference genomes. Therefore, exogenous contamination not only complicates aDNA analysis but also intensifies the computational demands during sequence mapping. Postmapping filtering tools such as PMDtools rely on the presence of aDNA damage misincorporations to remove contemporary contamination. However, not all endogenous aDNA reads present base misincorporations characteristic of aDNA damage, so this approach can lead to a large loss of endogenous sequences. Moreover, contaminating sequences from exogenous aDNA would not be removed by this approach. In short, efficient computational methods are lacking to not only identify but also efficiently remove contaminant DNA during the mapping process of aDNA datasets. In recent years, the field of metagenomics has given rise to metagenomic classifiers capable of efficiently and accurately identifying diverse taxa in sequence data. 
These capabilities have shown potential in ancient metagenomic studies as well, although with specific caveats for each tool . We hypothesize that metagenomic classifiers offer an efficient approach to removing contaminant DNA by filtering them out before the mapping stage. As a result, we predict improved mapping accuracy and a significant reduction in computational resources needed, thereby making aDNA analysis more accurate and accessible across various computing platforms. A similar approach has been successfully applied to remove human patient DNA from clinical metagenomic data . Kraken2 is a k -mer-based classifier initially designed to perform metagenomic analyses. Here, we propose an approach where Kraken2 is used to identify and remove contaminating sequences from ancient DNA datasets of single organisms to accelerate the mapping process and improve mapping accuracy. We opted for a k -mer-based metagenomic classifier over alignment-based methods due to its faster processing speed , and we chose Kraken2 because it presented the best balance between speed, database size, and classification accuracy compared to other metagenomic classifiers , especially in ancient DNA contexts . Using both simulated (human and dog) and empirical shotgun aDNA datasets, we show that this workflow presents a simple and efficient method that enables the removal of contaminating sequences from aDNA datasets with limited loss of endogenous DNA sequences while simultaneously reducing the overall computational resources needed during the mapping process as well as mitigating any potential errors introduced by spuriously mapping contaminant reads. Data simulations We simulated ancient human ( Homo sapiens ) and dog ( Canis lupus familiaris ) shotgun sequencing datasets with varying levels of contamination using Gargammel 1.1.2 . The human and dog reads, referred to as endogenous, were simulated from autosomal, sex, and mitochondrial contigs of GRCh38.p14 (GCF_000001405.40) and CanFam6 (GCF_000002285.5) genome assemblies, respectively. The exogenous contaminants consisted of modern human reads, represented by reads simulated from the same GRCh38.p14 genome assembly; microbial contamination, represented by reads simulated from profiled microbial communities as presented by Seguin-Orlando et al. , including bacteria, viruses, and phages; and other contaminating reads representing common sources of contaminants found in aDNA datasets : sheep ( ARS-UI_Ramb_v2.0 ; GCF_016772045.1), domestic cattle ( ARS-UCD1.3 ; GCF_002263795.2), pig ( Sscrofa11.1 ; GCF_000003025.6), goat ( ARS1.2 ; GCF_001704415.2), and chicken ( GRCg6a ; GCF_000002315.6). Deamination profiles were simulated from the Loschbour individual in Lazaridis et al. for the endogenous and microbial reads, simulating the damage profile of a single-stranded aDNA library partially treated with uracil–DNA–glycosylase (UDG) . Finally, all the reads were simulated as paired-end Illumina HiSeqX reads with a read length of 75 bp and size distribution of the sequenced fragments simulated from the subset of a 45 000-year-old human sample from Siberia . Default Gargammel settings were used for the rest, including base quality distribution of the simulated bases and adding Illumina adapter sequences for fragments shorter than the read length (75 bp). We simulated two scenarios with differing levels of modern human and other contamination in the dataset . 
For the ancient human dataset, the modern human portion was replaced with microbial sequences since a taxonomic classifier such as Kraken2 will not be able to differentiate between modern and ancient reads from the same taxa. The endogenous proportion of the simulated reads ranged from 0.1% to 60%, with 20 million read pairs simulated for each . Data processing Preprocessing of simulated data The simulated datasets were processed with AdapterRemoval 2.3.2 to trim adapters and merge paired-end reads with length and base quality filters ' --minlength 30 ' and ' --minquality 20 ', respectively . Additionally, ' --qualitymax ' was set to 64 to account for maximum read quality in the datasets simulated using Gargammel . Data processing for baseline mapping performance We mapped the merged reads using bwa aln 0.7.17-r1188 with the ' -n 0.01 ', ' -l 1024 ' and ' -o 2 ' ancient DNA parameters . The human and dog datasets were mapped to GRCh38.p14 and CanFam6 genome assemblies, respectively. The mapped files were used to establish the baseline performance of bwa aln for all simulated endogenous levels. Measuring baseline mapping performance To measure the mapping performance, the reads were then categorized as 'true positives' ($TP$), 'true negatives' ($TN$), 'false positives' ($FP$), and 'false negatives' ($FN$) depending on the source taxonomy (endogenous or contamination) of the mapped read and its mapping quality to the reference genome . These were used to calculate precision, recall, and f-measure to establish the baseline mapping performance . $\mathrm{Precision} = \frac{TP}{TP + FP}$ measures the proportion of correctly predicted positive instances out of all instances predicted as positive. In this context, precision quantifies the ratio of endogenous reads to all reads mapped to the reference. $\mathrm{Recall} = \frac{TP}{TP + FN}$, also known as sensitivity or the true positive rate, measures the proportion of correctly identified positive instances from all positive instances in the dataset. In this context, recall quantifies the ratio of endogenous reads mapped to the reference to all endogenous reads in the dataset. f-measure (or f-score): $F_1 = 2 \times \frac{\mathrm{Precision} \times \mathrm{Recall}}{\mathrm{Precision} + \mathrm{Recall}}$ is the harmonic mean of precision and recall . It is typically used to quantify the overall performance of a classification model since optimizing only for precision or recall can have conflicting goals. A high $F_1$ value suggests accurate identification of positive instances while also minimizing false positives. In this context, $F_1$ quantifies a method's ability to identify endogenous reads and contaminants in the dataset accurately. Measuring the performance of competitive mapping For the dog dataset, we also benchmarked competitive mapping using a composite reference with the CanFam6 and GRCh38.p14 assemblies as suggested by Feuerborn et al. , which has been shown to remove contemporary human contamination in ancient faunal datasets. We then mapped the reads using the composite reference and categorized reads according to Supplementary to calculate precision , recall , and f-measure . Metagenomic classification Measuring the performance of metagenomic filtering before mapping We used Kraken2 (v 2.1.3) , a k -mer-based method for taxonomic classification of sequence data. A k -mer refers to substrings of length k within a nucleotide sequence.
Kraken2 relies on the presence of exact l -mer (a subsequence of length l , where l ≤ k ) matches between sequence data and a reference database containing known sequences and taxonomies to perform taxonomic classification. To understand the effect of database composition on taxonomic classification, we created different databases that contained single species or sequences from multiple domains of life . All databases were built with the default k -mer length of 35, as well as the k -mer length of 29. This choice of k -mer lengths is motivated by the fact that while a longer k -mer (i.e. 35) decreases the risk of false classification compared to a shorter k -mer (i.e. 29) , aDNA datasets generally use a cut-off of 30 bp for the minimal fragment length for mapping to a reference genome . Therefore, a k -mer of 29 is likely more appropriate in the context of aDNA datasets to prevent a bias against very short reads. Finally, we also tested a publicly available database, k2_nt_20230502 ( https://benlangmead.github.io/aws-indexes/k2 ), which includes a larger collection of sequences across the three domains of life and viruses from the National Centre for Biotechnology Information (NCBI), inclusive of GenBank , RefSeq, Third Party Annotation (TPA), and Protein Data Bank (PDB). We evaluated databases based on their size and sensitivity when classifying endogenous sequences. The best-performing databases (see results) were used to filter reads from the simulated dataset before mapping. The filtered reads were then mapped to the reference with the same method as the baseline above. Finally, precision , recall , and $F_1$ were calculated after reads were categorized based on metagenomic filtering and mapping as per and . Building Kraken2 databases The k -mer 35 databases were built with the default options using genomes described in . The k -mer 29 database was built with options ' --kmer-len 29 ', ' --minimizer-len 24 ', and ' --minimizer-spaces 6 ' and included genomes as described in . Kraken2 filtering The Kraken2 classifications were run with default parameters. A nextflow pipeline is available on GitHub ( https://github.com/shyama-mama/taxonomicfiltering ) to perform filtering given a database and input reads. Empirical data To validate our method with empirical data, we selected 10 ancient Canis samples from Bergström et al . at similar endogenous proportions to the simulated data . We mapped the data with bwa aln to the CanFam6 reference genome following the parameters in the Data Processing for Baseline Mapping Performance section. We built a Kraken2 database with a k -mer length set to 29 and composed of reference sequences from dog ( CanFam6 ; GCF_000002285.5), grey wolf ( mCanLor1.2 ; GCA_905319855.2), and dingo ( ASM325472v2 ; GCF_003254725.2). Additionally, we also added a consensus CanFam3.1 (GCF_000002285.3) reference using the alternate allele from bi-allelic single-nucleotide polymorphisms (SNPs) from 722 Canidae genomes into the database to minimize reference bias . We used databases with and without the alternate allele information for filtering. We filtered the data by discarding any unclassified reads before mapping with bwa aln as above and compared both approaches: mapping only (Sample BWA ) and filtering before mapping (Sample Filt ).
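As an illustration of the filter-then-map workflow described above, the Python sketch below wraps the Kraken2 classification and bwa aln mapping steps with subprocess calls; the database, reference, and file names are hypothetical placeholders, the command-line options mirror those reported in the text, and this is a simplified sketch rather than the authors' nextflow pipeline.

```python
import subprocess

# Hypothetical inputs: a prebuilt Kraken2 database, an indexed reference, and adapter-trimmed reads.
db, ref, reads = "k2_canis_db", "CanFam6.fa", "sample.trimmed.fastq.gz"

# 1. Classify reads with Kraken2 and keep only the classified (putatively endogenous) ones.
subprocess.run([
    "kraken2", "--db", db, "--threads", "8", "--gzip-compressed",
    "--classified-out", "sample.classified.fastq",
    "--unclassified-out", "sample.unclassified.fastq",
    "--report", "sample.k2_report.txt", "--output", "sample.k2_out.txt",
    reads,
], check=True)

# 2. Map the retained reads with the ancient-DNA bwa aln settings reported in the text.
with open("sample.sai", "w") as sai:
    subprocess.run(["bwa", "aln", "-n", "0.01", "-l", "1024", "-o", "2", "-t", "8",
                    ref, "sample.classified.fastq"], stdout=sai, check=True)
with open("sample.sam", "w") as sam:
    subprocess.run(["bwa", "samse", ref, "sample.sai", "sample.classified.fastq"],
                   stdout=sam, check=True)
```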
To understand the effect of filtering on the data, for each sample, we extracted reads that met the following criteria: they mapped to the reference with a mapping quality score >20 when no premapping metagenomic filtering was performed, and they were also removed by Kraken2 filtering when metagenomic filtering was performed. These reads were then classified using Nucleotide BLAST v2.14.1 . We used the MEGAN v6.24.20 suite of tools to get taxonomic abundances for each sample using the weighted lowest common ancestor (LCA) algorithm and following best-suited options for ancient DNA as suggested by Eisenhofer and Weyrich . Since Kraken2's taxonomic assignment relies on exact l -mer matches (where l ≤ k ) to sequences in the database, there is a possibility of introducing reference bias when filtering relies on identifying endogenous sequences in the dataset. Hence, we used f4 statistics from ADMIXTOOLS v7.0.2 to assess if endogenous reads that were removed during Kraken2 filtering introduced any bias in downstream analysis. The f4 statistic was computed with the configuration f4(Sample Filt , Sample BWA ; Basenji01, Coyote01California), where Coyote01California and Basenji01 are a coyote and a dog, respectively, from Plassais et al. . The effect of reference bias was tested for databases with and without alternate allele information. Pseudo-haploid genotype calls were generated for each sample using PileupCaller 1.5.2 ( https://github.com/stschiff/sequenceTools ) at polymorphic loci ascertained using heterozygous sites in the coyote genome mapped to the CanFam6 reference. To account for variability during random allele sampling, we replicated pseudo-haploid genotype calls three times per sample. We tested each sample for reference bias caused by filtering, where a significant deviation from 0 in the positive direction indicates an introduced bias towards the reference due to filtering.
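To make the reference-bias check concrete, the following is a small conceptual Python sketch of pseudo-haploid allele sampling and an f4-style statistic computed from allele frequencies; it illustrates the underlying idea under assumed placeholder inputs and is not a substitute for PileupCaller or ADMIXTOOLS.

```python
import numpy as np

rng = np.random.default_rng(1)

def pseudo_haploid(ref_counts, alt_counts):
    """Randomly sample one allele per site from the observed read counts
    (0 = reference allele, 1 = alternate allele), as in pseudo-haploid calling.
    Assumes at least one read covers each site."""
    ref_counts = np.asarray(ref_counts, dtype=float)
    alt_counts = np.asarray(alt_counts, dtype=float)
    p_alt = alt_counts / (ref_counts + alt_counts)
    return (rng.random(p_alt.shape) < p_alt).astype(int)

def f4(p_a, p_b, p_c, p_d):
    """f4(A, B; C, D) averaged over SNPs: mean of (pA - pB) * (pC - pD).
    For f4(Sample_Filt, Sample_BWA; Basenji01, Coyote01California), a value
    consistent with 0 suggests the filtering introduced no detectable bias
    relative to the unfiltered alignments."""
    return np.mean((p_a - p_b) * (p_c - p_d))

# Toy usage with placeholder read counts and allele frequencies.
calls = pseudo_haploid(ref_counts=[3, 1, 0, 2, 4], alt_counts=[0, 2, 3, 2, 1])
n_snps = 10_000
freqs = {pop: rng.uniform(0, 1, n_snps) for pop in ["filt", "bwa", "basenji", "coyote"]}
print("f4(Filt, BWA; Basenji, Coyote) =",
      round(f4(freqs["filt"], freqs["bwa"], freqs["basenji"], freqs["coyote"]), 5))
```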
We tested each sample for reference bias caused by filtering where a significant deviation from 0 in the positive direction indicates an introduced bias towards the reference due to filtering.

Effect of database composition on taxonomic classifications

Based on simulated ancient human and dog reads with no introduced contamination, we demonstrate that database composition significantly impacts classification accuracy . Databases consisting of only a single genome exhibit a bias towards classifying sequences as belonging to that genome’s taxonomy. Conversely, more complex databases may compromise taxonomic resolution—i.e. the ability to classify sequences at the lowest specific taxonomic rank, as previously observed by Nasko et al . . We observed a substantial reduction in dog reads misclassified as human from 3.17% to 0.12% and 52.52% to 3.34% for k -mer lengths 35 and 29, respectively, when using ‘ k2_human_dog ’, built from human and dog reference genomes, compared to ‘ k2_human ’, built with only the human reference . Using the combined database led to some reads classified as Boreoeutheria (0.26% and 1.77% for k -mer lengths 35 and 29, respectively), a classification rank that includes humans and dogs. Similarly, Canis sequences in ‘ k2_nt_20230502 ’ led to dog reads classified primarily as Canis or C. lupus , and the presence of primate genomes in the ‘ k2_custom ’ database led to the classification of human reads as Hominidae or Homininae . We also note a substantial number of human reads classified as ‘root’ (22.22%) when using ‘ k2_nt_20230502 ’. A ‘root’ classification is a result of the query sequence matching with viral and cellular organisms in the database . It is unclear if this is a result of contaminated viral sequences in the database or the presence of human endogenous retroviral sequences. Databases built with k -mer length 29 decrease the size of the database and the number of unclassified reads—i.e. reads that could not be assigned a taxonomy . Kraken2 assigns a classification to a read only when the read contains unique l -mers (a subsequence of length l , where l ≤ k ) that exactly match the l -mer sequences in the database . Hence, databases built with a k -mer length of 29 assign taxonomies to shorter reads and reads with misincorporations due to damage better than the larger default k -mer length . However, this comes at the price of lower taxonomic resolution, with increased ancient dog and human reads classified as ‘cellular organism’ or ‘Eukaryota’ (21.38% and 22.31% for dog and human reads, respectively, using ‘ k2_custom ’), as well as increased false classifications as the simulated reads were misclassified as other vertebrate species in the database (>10% in dog reads using ‘ k2_custom ’).

Benchmarking filtering before mapping

Filtering before mapping using metagenomic classifiers primarily aims to retain as many endogenous reads as possible while discarding contaminating sequences. However, sensible taxonomic classification is highly dependent on taxa represented in the database, regardless of database complexity. As we showed above, the rate of classification bias is driven by the k -mer length used to build the database, with lower specificity due to a shorter k -mer length leading to more aberrant classifications. Therefore, we investigated two strategies that take into account the strengths and drawbacks of different types of databases.
First, we applied filtering based on identifying contaminants using databases built with k -mer 35 and consisting of sequences from the target taxa and as many contaminating genomes as possible, to remove as many contaminating sequences as possible (negative filtering). We predict that this strategy is well suited to databases built with larger k -mer sizes, which show substantially lower false classification rates . Second, we applied filtering based on identifying endogenous reads using databases built with k -mer 29 and consisting of sequences from target taxa and genomes related to the target taxa to the family level, to retain as many endogenous reads as possible (positive filtering). We predict that this strategy is well suited to low k -mer sizes, which show higher classification rates at shorter read lengths . The ancient human and dog reads were simulated as per the two scenarios in and . Indeed, Kraken2 is a metagenomic classifier, and it will not be able to differentiate between modern and ancient reads from the same species. Furthermore, while modern human contamination is a persistent problem in ancient human datasets, tools such as PMDtools have been shown to effectively remove them . Subsequently, for the negative filtering strategy (removing contaminants), the simulated datasets were classified using the ‘k2_custom’ database, built with k -mer 35, to select reads classified at the order rank of the species of interest—Primates for human and Carnivora for dog—or remaining unclassified, before performing mapping with bwa aln . For the positive filtering strategy (retaining endogenous data), we used ‘k2_canis_lupus ’ and ‘k2_human ’ built with k -mer 29 for the dog and human datasets, respectively, and only mapped classified reads. A positive outcome of filtering reads before mapping is shorter processing times. We see more than a 6-fold increase in processing speed ( , and ). Here, processing time includes the runtime to classify, filter, and map reads when filtering is applied, as opposed to only mapping time when no filtering is applied. Interestingly, despite identical total read counts across all endogenous fractions, mapping took longer for data with high contamination from vertebrate sequences (i.e. Scenario 1 in and ) as opposed to data with low vertebrate contamination (i.e. Scenario 2 in and ), when mapped solely using bwa aln . This suggests that contaminants from taxa closely related to the target species’ reference disproportionately and negatively impact mapping time. Using a larger composite reference in competitive mapping further extended mapping times, up to 1.6-fold . Combining competitive mapping with both Kraken2 filtering strategies greatly improved mapping time, up to 4.4-fold and 6.8-fold faster when compared to single-reference and competitive mapping, respectively . By default, Kraken2 databases are loaded into working memory and hence require, at minimum, free memory the same size as the database used. The ‘ k2_canis_lupus ’ ( k -mer 29) and ‘ k2_human ’ ( k -mer 29) databases are relatively small at 2.6 and 2.9 GB, respectively, making it possible to filter data even on personal machines. In contrast, the ‘k2_custom’ ( k -mer 35) database is substantially larger, at 166 GB, and is better suited for high-performance computing systems.
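The two selection rules described above can be made concrete with a small sketch that consumes Kraken2's per-read output (tab-separated: classification flag C/U, read ID, taxID, ...). This is an illustrative implementation only, not the authors' pipeline; file names are placeholders, and the set of taxIDs belonging to the target order (e.g. Carnivora) would have to be precomputed from the NCBI taxonomy.

```python
def kept_read_ids(kraken_output, allowed_taxids=None, keep_unclassified=False):
    """Return the read IDs to keep.

    Positive filtering: allowed_taxids=None, keep_unclassified=False
        -> keep every classified read.
    Negative filtering: allowed_taxids=<taxIDs under the target order, as strings>,
                        keep_unclassified=True
        -> keep reads classified within the target order plus unclassified reads.
    """
    keep = set()
    with open(kraken_output) as handle:
        for line in handle:
            status, read_id, taxid = line.rstrip("\n").split("\t")[:3]
            if status == "U":
                if keep_unclassified:
                    keep.add(read_id)
            elif allowed_taxids is None or taxid in allowed_taxids:
                keep.add(read_id)
    return keep

def filter_fastq(fastq_in, fastq_out, keep):
    """Write only the reads whose ID is in 'keep' (plain four-line FASTQ)."""
    with open(fastq_in) as fin, open(fastq_out, "w") as fout:
        while True:
            record = [fin.readline() for _ in range(4)]
            if not record[0]:
                break
            read_id = record[0][1:].split()[0]
            if read_id in keep:
                fout.writelines(record)

# Positive filtering with a small species-level k-mer 29 database:
keep = kept_read_ids("sample.kraken")
filter_fastq("sample.trimmed.fq", "sample.positive_filtered.fq", keep)
```

In practice, Kraken2's --classified-out/--unclassified-out options already provide the split needed for positive filtering; explicit read-ID sets such as these are mainly needed for the order-level rule used in negative filtering.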
Importantly, premapping filtering, regardless of the filtering strategy, greatly reduces the read volume for highly contaminated samples ( and ), which, in turn, facilitates a more efficient mapping process, sparing computational resources. Across all endogenous proportions tested, both Kraken2 filtering strategies consistently yielded more precise mapping compared to bwa aln mapping alone for both the high and low vertebrate contaminations ( and ; and ). Competitive mapping was also more precise compared to bwa aln mapping to a single reference, indicating it can accurately remove human contaminants from the faunal data, albeit with the longest run time. Since competitive mapping was only used to remove human contamination, other microbial and vertebrate contamination introduced remained in the mappings . Combining competitive mapping with Kraken2 filtering improved mapping accuracy beyond what either method achieved alone while significantly reducing the processing time required for competitive mapping . We summarized the precision and recall of each method by read length . Both competitive mapping and positive filtering showed the largest precision increase for shorter fragments (30–40 bp) compared to longer fragments when benchmarked against bwa aln mapping to a single reference. Interestingly, negative filtering achieved the highest precision improvement with fragments 41–50 bp in length. This is due to the k -mer 35 database used for negative filtering; shorter fragments <35 bp cannot be classified with this k -mer length. Additionally, since negative filtering retains unclassified reads for mapping with bwa aln to maximize endogenous read retention, the precision increase is less pronounced for 30–40 bp fragments than for 41–50 bp fragments. We observe a greater loss of endogenous reads when applying both Kraken2 filtering strategies compared to mapping directly to the reference, with losses up to 0.99% and 3.8% for negative and positive filtering, respectively ( and ). For negative filtering, this lower recall stems from endogenous reads misclassified as other taxa, a known limitation of Kraken2 ’s probabilistic compact hash table, which can lead to false classifications. This loss is more pronounced with fragments 41–50 bp in length. In positive filtering, short fragments (<50 bp) were filtered out, as the k -mer 29 database struggled to classify these reads, despite the shorter k -mer length . Despite this loss of endogenous reads, the resulting precision increase from filtering improved f-measures for low endogenous samples with high vertebrate contamination (Scenario 1). In contrast, this same loss of endogenous reads lowered f-measure for samples with more endogenous data or less vertebrate contamination (Scenario 2) when compared to bwa aln mapping to a single reference . Competitive mapping, in comparison, showed the lowest loss of endogenous reads across all fragment lengths and endogenous fractions. Consequently, combining Kraken2 filtering with competitive mapping improved the f -measure beyond what Kraken2 filtering achieved individually.

Filtering empirical data

Finally, we mapped classified reads from 10 ancient grey wolf samples from Bergström et al . using the positive filtering strategy with the ‘ k2_canis_lupus ’ database built with k -mer 29 and including biallelic SNPs from 722 Canidae genomes . We see up to a 16-fold increase in processing speed (sample 367 with 0.4% endogenous DNA) when filtering with Kraken2 .
The performance boost was observed across samples, with even those having a higher endogenous fraction of 20% and 50% or more experiencing a 2-fold and 1.3-fold increase in processing speeds, respectively. In line with our simulated results, we observed some loss of endogenous data . Reads that were mapped to the reference with a mapping quality score above 20 when no premapping metagenomic filtering was performed but were filtered out during metagenomic filtering were assigned taxonomies using BLAST and LCA algorithms in MEGAN. Most of these reads could not be assigned a taxonomy (41.80%–76.55%), with the majority of the classified reads assigned to the Order Carnivora (> 93% of classified reads). Of note, all samples had reads classified at the order Primates (0.03%–0.70%). Competitive mapping against CanFam6 and GRCh38.p14 references saw the reads map preferentially to the human reference. Since these reads were originally also mapped to the CanFam6 reference, we were able to compare the damage profiles of these reads mapping to the two references independently. Six out of the 10 samples (367, IN18-005, TU114, TU148, WOL-VAL-18A, and CANIS-ALAS-016) showed spurious C-to-T and G-to-A misincorporations when mapped to the dog reference, whereas no such misincorporations were observed when mapped to the human reference, suggesting these reads are modern human contamination . We also observed up to 3.58%, 0.82%, 0.27%, 0.26%, and 0.25% reads classified as bacteria, Artiodactyla, Rodentia, Chiroptera, and Lepidoptera, respectively . The authenticity of these classifications was not determined. We see no significant bias (|Z| < 3) introduced by filtering with Kraken2 when the database used to select endogenous reads contains variation information for Canids .
However, it should be noted that samples CANIS-ALAS-016 and AL2744 showed low but significant reference bias when the Kraken2 database used to identify endogenous reads did not contain the alternate allele information ( , and ). Our study presents evidence that premapping filtering using Kraken2 not only optimizes the usage of computational resources by greatly reducing mapping time but also improves the precision of mapped aDNA reads. This is particularly evident in datasets involving samples with very low levels of endogenous sequences and high contamination from sequences closely related to the target species. By implementing a positive filtering strategy to retain putative endogenous reads using a Kraken2 database built with k -mer 29 consisting of reference and alternate sequences, and genomes from closely related taxa to the species of interest, we achieve a streamlined process that is resource-efficient and suitable for a wide array of computational environments, including personal machines due to the database requiring <5 GB of memory to run. A more thorough approach is negative filtering to remove putative contaminants. This strategy utilizes a comprehensive database, built with k -mer 35 and encompassing a broad spectrum of contaminants, and affords enhanced precision and recall. The increased memory requirements for a larger database mean this approach is more suitable for researchers who have access to high-performance computing resources. However, the feasibility of this strategy depends on how well the database represents the full array of potential contaminants in a particular dataset because the filtering depends on how many contaminants can be classified and hence removed. Identifying environmental microbes is a limitation when working with empirical data, since most reference databases focus on human pathogens or microbial species that are of interest to humans . However, in recent years, ancient and modern environmental microbiomes have increasingly been characterized . The development of resources like the Genome Taxonomy Database is helping bridge the gap between genomics and microbial taxonomy, improving microbial characterization across diverse environments, including contamination present in ancient datasets. Our findings suggest that the choice of strategy should be guided by the available resources and specific priorities. The loss of a negligible amount of endogenous data is an inherent limitation of the Kraken2 filtering approach, but this trade-off is balanced by gains in mapping efficiency and precision. Endogenous read loss occurs either through misclassifications—more common in negative filtering—or through unclassified reads in the positive filtering strategy. Misclassifications stem from the probabilistic compact hash table used by Kraken2 , which, though memory-efficient compared to a standard hash table, sacrifices some specificity and accuracy . Likewise, unclassified reads in positive filtering also stem from this limitation, with shorter, damaged DNA fragments being especially affected . We also caution against filtering by identifying endogenous reads when studying extinct species for which no reference genomic resources exist, as it might impact the retrieval of endogenous reads and cause the reads to be biased towards the reference alleles in the database. For well-studied species such as humans and dogs, we propose adding alternate allele information from large genomic studies to better capture variation in the sequence data.
Furthermore, the processing time of ancient hominin genomes enriched with the 1240k SNP panel could be greatly reduced if a database with the human reference and the expected alternate allele information captured by the panel were used to select human reads before mapping. Our findings coincide with the increasing use of pangenomic approaches for genomic analysis, such as adding alternate allele information to reduce reference bias during mapping , building databases from sequences from pangenomic projects for improved host removal from clinical metagenomic data , and novel tools such as Euka that use pangenomic graphs for metagenomic classification—although currently, Euka databases are restricted to mitochondrial genomes of tetrapods and arthropods. We also highlight that using databases with microbial and human sequences to classify human DNA can lead to some human reads being classified at the kingdom or domain rank as previously observed because some microbial sequences are contaminated with human DNA. It is thus imperative to benchmark databases against the project’s objectives to mitigate these issues and their effect on data interpretation, which aligns with the growing body of literature that underscores the importance of benchmarking metagenomic classifiers in different contexts . In recent years, there have been increased efforts in characterizing and removing contaminated sequences in reference databases to reduce erroneous interpretations of metagenomic datasets . We propose that an approach that includes classification-based filtering has the potential to refine data processing and improve overall mapping data quality. We anticipate that continued improvements in metagenomic classifiers and reference databases that can identify environmental taxa will result in increased accuracy of our proposed filtering approaches and reduce data loss, paving the way for more precise reconstructions of ancient genomes.

Key Points

Contamination is a major challenge in paleogenomics.
Computational methods are essential to distinguish between endogenous and contaminant sequences.
We propose a new workflow relying on a metagenomic classifier to filter out contaminants prior to aligning sequences to a reference sequence.
We provide clear strategies to build the reference database and finetune the parameters to optimize the classification.
Our workflow significantly reduces the computational resources and overall runtime while improving mapping precision and downstream analyses.

BIB_supplementary_document_revised_bbae646
BIB_supplementary_tables_bbae646
Dissociation of impulsive traits by subthalamic metabotropic glutamate receptor 4
2a1020d5-13e2-49c0-99f5-9f01f682c64d
8803315
Physiology[mh]
Successful environmental interactions require fast, but conditional execution of appropriate behavioral responses. Impulse control adapts the balance between action and action inhibition to optimize behavioral outcomes. The past decade has seen a wealth of advances in our knowledge of the neuronal basis of impulsive action. Functional magnetic resonance imaging (fMRI) studies in humans and behavioral studies in animal models have delineated key areas in the brain that control impulsive action . Several interconnected brain areas, including the prefrontal cortex , anterior cingulate cortex , insular cortex , amygdala , subthalamic nucleus (STN) , and nucleus accumbens , among others , control various aspects of impulsive behavior. These regions process multiple brain functions related to impulsivity, such as reward processing, decision-making, and motor execution , in addition to their canonical functions. Consequently, they encode behavioral parameters of impulsive choices as discrete features within broader patterns of neuronal activity . Genetically, many genes have been associated with distinct forms of impulsivity and impulsivity-related disorders, such as attention deficit hyperactivity disorder (ADHD) , which mainly affect serotonergic and dopaminergic systems and their glutamatergic modulation . As with many cognitive functions, impulsive actions underlie a natural variance expressed as a specific behavioral trait in both humans and rodents . This trait impulsivity can be defined as a preference for immediate rewards over larger delayed rewards. In this regard, moderate trait impulsivity involves taking calculated risks or pursuing unexpected outcomes to maximize overall gain or rewards. Excessive trait impulsivity involves unreasonably risky, premature behaviors resulting in negative consequences, such as punishments . According to the manual for assessment and diagnosis of mental disorders, DSM-V, pathological trait impulsivity is a key diagnostic indicator of impulse control disorders (ICDs), which have attracted significant public health interest in recent years. Moreover, excessive impulsivity contributes significantly to the pathology of mood disorders, drug abuse, and addiction, as well as ADHD and borderline personality disorder . Understanding the neuronal basis of trait impulsivity, thus, is key to identifying risk factors and progression toward these conditions. There has been substantial progress in characterizing the diverse neuronal circuitry and mechanisms that control impulsive action per se, but we know much less about the neuronal basis underlying natural variation in impulsivity. This manifests as stably expressed individual behavioral traits, programmed within impulsive circuitry (above), and separated from other brain functions. At the molecular level, the expression of a behavioral trait is shaped by genetic/epigenetic factors . The mechanisms that link individual genes and epigenetic factors to specific modulation of neuronal activity and trait impulsivity are poorly understood, however. To address these mechanisms, we used a free-moving, Go/No-Go (GNG) task to study the natural variance in experimental cohorts. This task involves a cued preparatory phase followed by two cues, to which the subjects must either respond rapidly or withhold a response in order to gain a reward. This allowed us to systematically chart hotspots for trait impulsivity and investigate their genetic modulation. 
This strategy identified the STN as a site where metabotropic glutamate signaling attenuates neural activity and impulsive choices, without affecting general motor output. From a translational perspective, this promised insight into biomedically relevant mechanisms underlying trait impulsivity. More generally, this study explored how modulatory genes may dissociate specific cognitive traits from other brain functions multiplexed within the same circuitry.

Brain-wide rs-fMRI associates STN with trait impulsivity

To screen for brain circuitry underlying trait impulsivity, we modeled variant impulsive behaviors in isogenic strains as proxy . Compared to more complex scenarios addressed by between-strain comparisons, this strategy was designed to identify basic functional differences emerging from a single genetic and neuroanatomical context, here C57BL/6. We chose a free-moving variant of a GNG task with signaled trials. This assay allowed us to monitor a broad range of behavioral characteristics within a given task session, from general motor behavior to several impulsivity-related parameters, which was not possible with head-fixed or port-fixed variants of this task . Each trial of the task was automatically initiated by a light cue that signaled the beginning of the precue period . Responses during this period were recorded (precue response rate), but neither rewarded nor penalized. After a randomized time period, an auditory cue was presented signaling either a Go or a No-Go (NG) trial. Correct responses to both Go (response) and NG (withhold) cues were rewarded. Conversely, incorrect Go and NG trials were noted as omission and false alarm (FA), respectively, and no reward was delivered (see Materials and methods for details). Precue response rate and FAs provide two important but distinct parameters of impulsivity in this task. FA responses in the port during the presentation of the NG cue indicate a failure to restrain prepotent motor responses (called stopping impulsivity), reflecting cognitive execution; they are thus an index of failure in cue-related withhold behavior. Precue responses, by contrast, reflect impulsive action in the preparatory phase of the task, indicating how long an animal is willing to withhold a response (called waiting impulsivity) before a cue appears . These forms of impulsivity may have different translational implications for psychiatric symptoms, as their involvement in pathobiology for obsessive-compulsive and addiction-related conditions, respectively, has been shown . To map hotspots in trait impulsivity within brain networks, we first trained an initial cohort of mice in our cued GNG task . We then set a behavioral contrast between animals above the 75th percentile and below the 25th percentile on a compound impulsivity measure of both precue response rate and incorrect NG responses (FA) . This served as a boundary for high and low impulsive (HI and LI) animals, respectively , which expressed stable impulsive traits over the course of at least three sessions ( ,ii). To isolate variance in impulsive behavior, independent of motivational state or performance in task execution, HI/LI groups were filtered for similar levels of correct Go responses (>95%, and ) and total number of licks at reward delivery . Both the HI and LI groups were analyzed by resting-state functional magnetic resonance imaging (rs-fMRI), as a straightforward means to map differences in brain network connectivity – in this case, differences that correlate with HI/LI behavioral traits.
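To make the group assignment concrete, the following is an illustrative sketch (not the authors' code) of the HI/LI split described above. The exact compound impulsivity score is not specified beyond combining precue response rate and FA, so the averaged z-scores used here are an assumption; the quartile boundaries and the >95% correct-Go filter follow the text, and the column names are hypothetical.

```python
import pandas as pd

def zscore(s: pd.Series) -> pd.Series:
    return (s - s.mean()) / s.std(ddof=0)

def split_hi_li(df: pd.DataFrame) -> pd.DataFrame:
    """df: one row per animal with session-averaged task measures."""
    # Match task performance across groups (>95% correct Go responses).
    df = df[df["go_correct_fraction"] > 0.95].copy()
    # Assumed compound score: mean of z-scored precue response rate and FA rate.
    df["impulsivity"] = (zscore(df["precue_response_rate"]) + zscore(df["fa_rate"])) / 2
    hi_cut = df["impulsivity"].quantile(0.75)
    li_cut = df["impulsivity"].quantile(0.25)
    df["group"] = "intermediate"
    df.loc[df["impulsivity"] >= hi_cut, "group"] = "HI"
    df.loc[df["impulsivity"] <= li_cut, "group"] = "LI"
    return df
```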
The goal of the fMRI analysis was, first and foremost, to highlight the top ranked nodes for impulsive traits. To establish such a ranked list reliably, we used an experimental design suitable for small sample sizes to limit the number of animals subjected to behavioral handling and the fMRI operating time and associated costs. Such a statistically low-powered screen is liable to false negatives; however, this design and the following analyses were tailored to identify the top-most ranked nodes. To trace the most prominent changes in the general brain network, we screened seed-wise for the most affected nodes. To this end, we rank-ordered node connectivity differences between HI and LI groups using group means to reduce effects from individual animal variance in low sample size settings . Specifically, after computing the brain-wide functional connectivity in HI and LI animals for each region (node), we determined differences in connectivity between the two groups, which were then evaluated for statistical significance. For the comparison of HI and LI groups, these subtracted correlations report effect sizes and rank more intuitively than simple p-values, in particular given the low sample size. We note that this data could be extended to delineate the associated specific impulsivity functional subnetworks (edges) in a follow-up graph theoretical workup of our data. Our rs-fMRI analysis was designed to locate hotspots (i.e., nodes) related to impulsive traits. This analysis revealed 13 (of 51) brain nodes that differed in their functional connectivity between HI and LI groups . In HI animals, several brain nodes had higher functional connectivity, when compared to their LI littermates; these included the superior central raphe nucleus (CSm), midbrain reticular nucleus (MRN), periaqueductal gray (PAG), agranular insular area (AI), infralimbic cortex (ILA), and nucleus accumbens (ACB). Previous studies have implicated many of these areas in modulating impulsive behavior, goal-directed actions, and in responding to rewards . In LI animals, by contrast, the STN had the highest functional connectivity when compared to the HI littermates, followed by the zona incerta (ZI) and amygdala (AMY) . Among these regions, the STN is a key region embedded in the basal ganglia known as the ‘indirect pathway of movement’. Thus, it is ideally situated to control motor activity in impulsivity-related tasks in animals and humans . To contrast functional networks for trait impulsivity with those from other behavioral parameters, we analyzed the rs-fMRI data of our cohort for brain regions involved in other behavioral parameters: the total distance traveled in the task and the total number of licks during the task, which indicates reward collection and is a proxy for the motivational state of the animals . Analysis of the effect sizes showed that STN ranked top for trait impulsivity but scored considerably lower for motor drive ( and ); the ZI and AMY, likewise, ranked high for trait impulsivity and low for motor drive. The MRN and PAG, by contrast, showed strong differences for both impulsivity and distance traveled, whereas they showed no significant effect for licks. The CSm showed a high correlation for HI parameters but the opposite effect for licks. Lastly, we found no brain region significantly different in all three parameters. Taken together, the rs-fMRI screen identified the STN as a modulatory hub with comparably high specificity for trait impulsivity. 
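One plausible way to implement the seed-wise contrast described above is sketched below (an assumed, generic workflow, not the authors' pipeline; preprocessing, the 51-node parcellation, and the significance evaluation are omitted). Per-animal node-by-node correlation matrices are averaged within each group, each node is summarized by its mean connectivity to all other nodes, and nodes are ranked by the HI-minus-LI difference.

```python
import numpy as np

def node_strength(corr_mats):
    """corr_mats: list of (n_nodes x n_nodes) correlation matrices, one per animal."""
    mean_mat = np.mean(corr_mats, axis=0)   # group-mean connectivity matrix
    np.fill_diagonal(mean_mat, np.nan)      # ignore self-correlations
    return np.nanmean(mean_mat, axis=1)     # one connectivity value per node

def rank_node_differences(hi_mats, li_mats, node_names):
    """Rank nodes by the group-mean connectivity difference (effect size)."""
    diff = node_strength(hi_mats) - node_strength(li_mats)
    order = np.argsort(-np.abs(diff))
    return [(node_names[i], float(diff[i])) for i in order]
```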
These findings suggest that the STN, in addition to its canonical role in basic motor control, is also the main site of naturally occurring variance in trait impulsivity.

STN differentially encodes impulsive features and motor states

To further dissect the specific roles of the STN in impulsivity and motor function, we performed in vivo extracellular recordings during the GNG task . We found mild responses in the STN in response to the light cue that signaled the beginning of the precue period , indicating specific activity in the waiting period of the task. As expected, we also found responses to the Go sound onset , but we saw no significant response to the auditory cue signaling an NG trial, whether the animals responded with an FA or with a correct withhold . Consistent with the role of the STN in reward processing , several units modulated their activity upon reward . Surprisingly, reward-related information is processed by units separate from those potentially gating Go responses . From these data, we conclude that responses to the Go cue and to the reward are encoded by separate channels in the STN. Using distinct channels may uncouple reward value from response control during conditional responding. Our analysis also revealed STN responses coupled to the precue, Go, and FA pokes , and we detected unit populations within the STN that responded to the onset of immobility and movement . These immobility onsets were spread across various task periods; the majority occurred either in the precue or inter-trial interval (ITI) periods of the task, and fewer than 5% occurred during presentation of the NG sound . These data indicate that the STN encodes preparatory withholding, akin to behavioral inhibition, rather than cognitive control of Go vs. NG cue-dependent behavioral discrimination . We next explored the encoding of these features in the principal component analysis (PCA) space of the population vector activity in the STN. We found distinct vector paths for subsets of these parameters, which suggests rather rigid separation of some stimulus and behavior variables. Responses to the Go sound and reward onset had similar trajectories in PCA space , with weak general response to both light and NG sound onsets. Moreover, cell-by-cell analysis revealed that these states were encoded across STN units and each unit carried multiple signals . Precue poke trajectories (and to a lesser extent FA pokes) were separated from the trajectories for immobility and Go pokes . These data indicate that the STN encodes features of impulsivity, particularly those associated with precue pokes, as a behavioral state distinct and dissociable from Go responses. The sets of units that responded to precue pokes with decreased firing rates varied in their firing patterns for other behavioral variables, for example, non-responding, increasing, or decreasing upon Go poke or immobility onsets . Taken together, these findings suggest multiplexed encoding of impulsivity and motor features in the GNG task at the unit level. Since many of the investigated units were bound to a precue poke event, we examined whether there might be a correlation between impulsivity behavioral events in this task period and STN electrophysiological activity. As a proxy for an animal split on impulsivity level, we grouped the pooled precue periods into four categories with increasing numbers of behavioral impulsivity events – the precue pokes .
The unit population suppressed during the precue poke event significantly decreased its average firing rate with increased incidence of behavioral events (negatively correlated with precue pokes). However, the activated population showed the converse effect (positively correlated with precue pokes). Importantly, the mean activity of the negatively correlated population was higher in periods without precue pokes compared with units activated by this behavioral event, strongly suggesting that this population acts as an inhibitory gatekeeper for this behavior. Local units are bound by time-locked oscillations that organize functional coupling to intra-STN and mesoscale brain networks. Specifically, beta-range coupling in the STN is associated with action inhibition , whereas gamma-range coupling is associated with action execution . To study the oscillatory activity during the GNG task, we compared event- (precue, Go and NG sound periods) and behavior- (immobility) related spectral powers in the task . High theta power during immobility might reflect pre-decision-making for action selection . As expected from the role of the STN in action inhibition, the immobility period showed the strongest local field potential (LFP) power in the beta band ( inset), while the highest gamma power was bound to the Go sound cue ( inset), reflecting beta- and gamma-related behavioral inhibition and execution, respectively. To investigate these patterns in more detail, we used time-resolved power spectrograms centered around the onset of events and behaviors. This analysis revealed action-locked STN activity probably linked to reward expectation/consumption . This signal was absent from NG-cued and non-reinforced FA pokes . By contrast, immobility onset correlated with an overall increase in theta power, followed by a drop of beta and gamma activity . Unlike the ITI periods, trial onsets significantly reduced theta and beta bands, suggesting behavioral disinhibition and lower action thresholds during the precue period . This pattern was less pronounced during the NG sound, indicating a bias toward waiting impulsivity in the STN . Consequently, by analyzing the power spectra around precue pokes, we found a strong decrease in the theta, beta, and gamma bands prior to a precue poke , suggesting that neuronal decoupling of the STN from its intra-STN and mesoscale networks results in impulsive choice. Overall, this picture recapitulates the decoupling of the STN from global networks in HI animals .

Optogenetic perturbation of the STN modulates impulsivity

To assess the gatekeeping function of the STN in the GNG task, we used an optogenetic approach. By using adeno-associated viral vectors, we expressed the light-sensitive opsins channelrhodopsin-2 (ChR2) or archaerhodopsin (Arch) in the STN and implanted an optical fiber above the STN for light activation . Prior to behavioral testing, we functionally validated this approach by patch clamp recordings to measure activation and inhibition of the STN . The temporal resolution of optogenetics allowed us to investigate STN function specifically in the precue and cue phases of the task. Inhibition of the STN during the precue phase significantly increased the precue response rate, whereas activation had the opposite trend . These opposite effects strongly suggest bidirectional control of impulsivity by the STN.
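Returning to the spectral comparisons above, band-limited LFP power for a given task period can be estimated as in the sketch below. The theta/beta/gamma ranges shown are common rodent conventions assumed here for illustration only; they are not values reported in this study.

```python
import numpy as np
from scipy.signal import welch

BANDS = {"theta": (4, 12), "beta": (13, 30), "gamma": (30, 100)}  # assumed ranges (Hz)

def band_power(lfp, fs):
    """Integrated power spectral density per band for one LFP segment.

    lfp: 1-D voltage trace for a single task period; fs: sampling rate in Hz.
    """
    freqs, psd = welch(lfp, fs=fs, nperseg=int(fs))   # ~1-Hz frequency resolution
    step = freqs[1] - freqs[0]
    return {name: float(psd[(freqs >= lo) & (freqs < hi)].sum() * step)
            for name, (lo, hi) in BANDS.items()}
```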
Under this precue-phase stimulation regime, we also observed post-laser effects: the latency to respond to both Go and NG cues was prolonged in the manipulations; however, this did not strongly impact the number of responses in either the Go or NG trials . As expected, cue-bound optogenetic manipulation of STN function did not affect the precue response rate ; however, it did reduce the fraction of Go responses in both the ChR2 and Arch groups , with alterations in latency to respond . We speculate that this unidirectional effect indicates that activation or inhibition of the STN impacts general task performance (as measured by the Go parameter). Consistent with this, STN lesions in animals and deep brain stimulation (DBS) in humans altered the number of cue responses . Also, studies in which STN activity was increased by blocking GABA-ergic inputs resulted in abnormal movements and decreased behavioral performance . Optogenetic activation or inhibition of STN function had no effect in NG trials on either response numbers or their latency , thus dissociating precue effects on waiting impulsivity from cue effects on stopping impulsivity. This suggests that, at lower intensity regimes (see below, ), STN function mainly affects waiting impulsivity and, to a lesser extent, cue-related responses, even though we found precue responses and FA responses were both represented in the STN ( , FA-related trajectories). Taken together, our findings from optogenetics and electrophysiological recordings show that the STN encodes and controls impulsive action, especially related to precue waiting impulsivity.

Pharmacological manipulation of mGlu4 interacts with trait impulsivity at STN

Altered glutamatergic neurotransmission is implicated in the pathobiology of impulsivity-related mental disorders . Moreover, pharmacological modulation of both ionotropic and metabotropic glutamate receptors affects impulsivity . The metabotropic glutamate receptor 4 (mGlu4) emerged as a strong candidate, as it is widely expressed in the STN and the globus pallidus, which are both elements of the indirect pathway of movement in the basal ganglia . Presynaptic activation of mGlu4 reduces, but does not abrogate, neurotransmitter release, making fast glutamatergic neurotransmission accessible to neuromodulatory therapeutic intervention. In fact, positive allosteric modulators (PAMs) of mGlu4 show promising results in pre-clinical and clinical trials as potential therapeutic agents to reverse motor dysfunction in Parkinson’s disease (PD) (reviewed in ; ), but can increase impulsivity in rats . We hypothesized that mGlu4 can specifically gate impulsive action via glutamatergic modulation of STN output and, in consequence, the expression of impulsive traits. To investigate the possible role of mGlu4 in impulsivity, we used a PAM specific for this receptor, 4-(( E )-styryl)-pyrimidin-2-ylamine (mGlu4 PAM; ; ). We delivered mGlu4 PAM to HI/LI animals and assayed for trait-dependent modulation of impulsivity . This PAM increased the precue response rate , without overtly affecting FA ( , ,C left) or Go responses ( , ,C right) in the GNG task, predominantly in LI animals. Thus, the effect appears to depend on baseline trait impulsivity. To investigate the possible neuromodulatory interaction of mGlu4 with trait impulsivity in the STN circuitry, we treated HI and LI animals with mGlu4 PAM and then evaluated its effects throughout the brain by rs-fMRI.
As for the comparison of HI and LI animals , we used a node-centric approach to rank-order hotspots of mGlu4 x HI/LI interaction. Again, rather than reporting p-value statistics, we reported the data as F values of the interactions and filtered for significance given the small sample size. Using this approach, we could directly map the interaction between mGlu4 modulation of brain functional connectivity and impulsive behavioral traits. As expected from the broad distribution of mGlu4 in the limbic system, mGlu4 PAM affected many brain areas , resulting from drug action on this distributed mGlu4 expression. Thus, this method should be sufficiently sensitive to monitor the interaction of mGlu4 PAM treatment and HI/LI animals, which should reveal hotspots relevant for mGlu4 modulation of trait impulsivity. This node-wise analysis of functional connectivity ranked the STN as the brain node with the highest impulsivity x treatment interaction . Rank analysis of this interaction, together with group splits for distance traveled and licks , showed that the effect on the STN is specific for impulsivity . Both the AMY and ZI ranked much lower in the interaction score for impulsivity than in the HI/LI functional connectivity split, indicating another role for mGlu4 in these structures.

Pharmacological activation of mGlu4 modulates neuronal activity in the STN and SNr

To assess how the mGlu4 PAM network effects are reflected at the cellular level, we compared STN activity between vehicle and mGlu4 PAM treatment. The mGlu4 PAM reduced the fraction of units inhibited upon precue poke when compared to the controls , and this fraction correlated negatively with waiting impulsivity (putative behavioral gating units; , D–E). We observed no such effect in units gating the onset of immobility . These data suggest that mGlu4 PAM specifically antagonizes STN activity to gate impulsive behavior. A drop in LFP power in the STN correlates with precue pokes . Thus, we reasoned that mGlu4 PAM should decrease LFP power in the STN. Consistent with the elevated precue response rate upon treatment with mGlu4 PAM , LFP power in the beta and gamma band was reduced during the precue phase when compared to the vehicle control ( , inset). In the vehicle control, STN LFP power dropped prior to precue pokes ( left), as observed above (baseline, ), whereas after mGlu4 PAM treatment beta and gamma power were uncoupled from precue pokes ( right). This mGlu4 PAM-mediated uncoupling was most evident when we compared the non-normalized absolute LFP power preceding precue pokes (left axes) in the vehicle control and after mGlu4 PAM treatment, and when analyzing the variance in raw LFP power around precue pokes (right axes, ). This showed a significant reduction in STN LFP modulation during mGlu4 PAM treatment upon impulsive action. We propose that these changes reflect intra-STN modulation of STN neural activity and locally generated LFP by mGlu4. The mGlu4 receptor is expressed mainly on pallido-subthalamic and subthalamo-nigral synapses , making it well situated to gate neuronal processing through the STN , either by acting on GP inputs or at STN outputs to the substantia nigra pars reticulata (SNr). Furthermore, mGlu4 PAMs modulate subthalamo-nigral pathways ex vivo . Thus, we hypothesized that mGlu4 PAMs might modulate STN outputs to affect impulsivity in vivo. To test this hypothesis, we performed an electrophysiological characterization of the SNr .
Unlike our findings with the STN, we saw no responses in the SNr coupled to light onset ; however, we found event-coupled units to Go sound , NG sound followed by FA , rewards , correct withhold , and units bound to behavioral events such as precue pokes , Go pokes , FA pokes , immobility , and movement onset . In contrast to neurons in the STN, the firing rate of SNr neurons remained constant when trials were split by the number of precue responses ( , see for comparison). Moreover, when compared to the population in the STN, the SNr population activity in PCA space showed less separated paths for behavior and for stimulus-centered events . As in the STN, SNr cell-by-cell analysis revealed differential encoding of precue pokes and Go sound responses, but to a lesser degree than in the STN , as for event onsets . This difference between areas might reflect the functional proximity of the SNr to basal ganglia motor output, where impulsivity and motor commands may separate less than they do in the STN, which encodes traces of higher cognitive decisions. To investigate this possibility at the SNr network level, we inspected event- (precue period, Go, and NG sound periods) and behavior- (immobility) related spectral LFP in our task. SNr beta power increased during precue phases and reflects beta band synchrony between the STN and SNr, generally associated with behavioral inhibition . Consistent with the proximity of the SNr to basal ganglia motor output, SNr gamma power transiently increased preceding execution of an action . The increased beta power seen upon the NG cue reflects SNr behavioral inhibition recruited by the withhold signals in the task . The strong motor binding of SNr oscillatory activity (as opposed to STN activity, ) suggests SNr modulation by sources other than the STN . Since the mGlu4 receptor is known to modulate synaptic transmission between the STN and SNr, we reasoned that the effects of mGlu4 PAM should be particularly pronounced in the SNr. Indeed, mGlu4 PAM treatment modulated SNr recruitment to precue pokes ( ,C), but not its binding to basic motor output ( ,C). At the level of LFPs, we observed only a minor reduction in overall LFP power upon mGlu4 PAM treatment . However, this was paralleled by a significant uncoupling of theta and beta band coupling to precue pokes in the SNr , and in the variance of raw LFP power around precue pokes . The LFP power spectra and their modulation by mGlu4 PAM ( , Ei) explain the dominant interaction of the STN with impulsive choice, compared to the SNr and its interaction with mGlu4 modulation . Together, these data suggest that mGlu4 modulates the control of impulsive action by the STN and SNr, without altering activity bound to motor output.

STN mGlu4 dissociates impulsive traits from motor function

We found that the STN is a hotspot for the interaction of mGlu4 PAM activity with trait impulsivity, strongly suggesting that mGlu4 expression in the STN modulates this behavioral trait directly. We examined animals with the HI or LI phenotype (similar group separation to our rs-fMRI experiment) in our GNG task and performed high-resolution in situ hybridization to locate areas of mGlu4 expression in samples from both groups . Quantification of the in situ hybridization signal revealed a statistically significant difference in Grm4 mRNA expression in the STNs of HI and LI animals, but not in the other brain regions we analyzed, including the ZI, thalamus, and AMY .
To screen for brain circuitry underlying trait impulsivity, we modeled variant impulsive behaviors in isogenic strains as proxy . Compared to more complex scenarios addressed by between strain comparisons, this strategy was designed to identify basic functional differences emerging from a single genetic and neuroanatomical context, here C57BL/6. We chose a free-moving variant of a GNG task with signaled trials. This assay allowed us to monitor a broad range of behavioral characteristics within a given task session, from general motor behavior to several impulsivity-related parameters, which was not possible with head-fixed or port-fixed variants of this task .
Each trial of the task was automatically initiated by a light cue that signaled the beginning of the precue period. Responses during this period were recorded (precue response rate), but neither rewarded nor penalized. After a randomized time period, an auditory cue was presented signaling either a Go or a No-Go (NG) trial. Correct responses to both Go (response) and NG (withhold) cues were rewarded. Conversely, incorrect Go and NG trials were noted as omission and false alarm (FA), respectively, and no reward was delivered (see Materials and methods for details). Precue response rate and FAs provide two important but distinct parameters of impulsivity in this task. FA responses in the port during presentation of the NG cue probe the capacity to restrain prepotent motor responses (called stopping impulsivity), a form of cognitive control; they are thus an index of failure in cue-related withhold behavior. Precue responses, by contrast, reflect impulsive action in the preparatory phase of the task, indicating how long an animal is willing to withhold a response (called waiting impulsivity) before a cue appears. These forms of impulsivity may have different translational implications for psychiatric symptoms, as they have been implicated in the pathobiology of obsessive-compulsive and addiction-related conditions, respectively. To map hotspots in trait impulsivity within brain networks, we first trained an initial cohort of mice in our cued GNG task. We then set a behavioral contrast between animals above the 75th percentile and below the 25th percentile on a compound impulsivity measure of both precue response rate and incorrect NG responses (FA). This served as a boundary for high and low impulsive (HI and LI) animals, respectively, which expressed stable impulsive traits over the course of at least three sessions. To isolate variance in impulsive behavior, independent of motivational state or performance in task execution, HI/LI groups were filtered for similar levels of correct Go responses (>95%) and total number of licks at reward delivery. Both the HI and LI groups were analyzed by resting-state functional magnetic resonance imaging (rs-fMRI), as a straightforward means to map differences in brain network connectivity – in this case, differences that correlate with HI/LI behavioral traits. The goal of the fMRI analysis was, first and foremost, to highlight the top-ranked nodes for impulsive traits. To establish such a ranked list reliably, we used an experimental design suitable for small sample sizes to limit the number of animals subjected to behavioral handling and the fMRI operating time and associated costs. Such a statistically low-powered screen is liable to false negatives; however, this design and the following analyses were tailored to identify the top-most ranked nodes. To trace the most prominent changes in the general brain network, we screened seed-wise for the most affected nodes. To this end, we rank-ordered node connectivity differences between HI and LI groups using group means to reduce effects from individual animal variance in low sample size settings. Specifically, after computing the brain-wide functional connectivity in HI and LI animals for each region (node), we determined differences in connectivity between the two groups, which were then evaluated for statistical significance.
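To make this node-wise comparison concrete, a minimal sketch of the group-mean subtraction and ranking logic is shown below. It assumes region-averaged BOLD time series stored as NumPy arrays (one array per animal, with hypothetical variable names) and illustrates the approach rather than reproducing the authors' actual analysis pipeline.

import numpy as np

def rank_node_differences(ts_hi, ts_li):
    # ts_hi, ts_li: lists of (timepoints x n_regions) arrays, one per animal.
    # Build each animal's region-by-region correlation matrix, average these
    # matrices within each group, subtract the group means, and score every
    # node by its mean connectivity difference to all other nodes.
    def group_mean_fc(ts_list):
        return np.mean([np.corrcoef(ts.T) for ts in ts_list], axis=0)

    diff = group_mean_fc(ts_hi) - group_mean_fc(ts_li)
    np.fill_diagonal(diff, 0.0)
    node_score = diff.mean(axis=1)           # HI minus LI connectivity, per node
    rank = np.argsort(-np.abs(node_score))   # strongest differences first
    return node_score, rank

The subtracted group-mean correlations computed in this way serve as simple effect-size estimates at each node.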
For the comparison of HI and LI groups, these subtracted correlations report effect sizes and rank more intuitively than simple p-values, in particular given the low sample size. We note that this data could be extended to delineate the associated specific impulsivity functional subnetworks (edges) in a follow-up graph theoretical workup of our data. Our rs-fMRI analysis was designed to locate hotspots (i.e., nodes) related to impulsive traits. This analysis revealed 13 (of 51) brain nodes that differed in their functional connectivity between HI and LI groups . In HI animals, several brain nodes had higher functional connectivity, when compared to their LI littermates; these included the superior central raphe nucleus (CSm), midbrain reticular nucleus (MRN), periaqueductal gray (PAG), agranular insular area (AI), infralimbic cortex (ILA), and nucleus accumbens (ACB). Previous studies have implicated many of these areas in modulating impulsive behavior, goal-directed actions, and in responding to rewards . In LI animals, by contrast, the STN had the highest functional connectivity when compared to the HI littermates, followed by the zona incerta (ZI) and amygdala (AMY) . Among these regions, the STN is a key region embedded in the basal ganglia known as the ‘indirect pathway of movement’. Thus, it is ideally situated to control motor activity in impulsivity-related tasks in animals and humans . To contrast functional networks for trait impulsivity with those from other behavioral parameters, we analyzed the rs-fMRI data of our cohort for brain regions involved in other behavioral parameters: the total distance traveled in the task and the total number of licks during the task, which indicates reward collection and is a proxy for the motivational state of the animals . Analysis of the effect sizes showed that STN ranked top for trait impulsivity but scored considerably lower for motor drive ( and ); the ZI and AMY, likewise, ranked high for trait impulsivity and low for motor drive. The MRN and PAG, by contrast, showed strong differences for both impulsivity and distance traveled, whereas they showed no significant effect for licks. The CSm showed a high correlation for HI parameters but the opposite effect for licks. Lastly, we found no brain region significantly different in all three parameters. Taken together, the rs-fMRI screen identified the STN as a modulatory hub with comparably high specificity for trait impulsivity. This suggests that the STN, in addition to its canonical role in basic motor control, is also the main site of naturally occurring variance in trait impulsivity. To dissect further the specific roles of the STN in impulsivity and motor function, we performed in vivo extracellular recordings during the GNG task . We found mild responses in the STN in response to the light cue that signaled the beginning of the precue period , indicating specific activity in the waiting period of the task. As expected, we also found responses to the Go sound onset , but we saw no significant response to the auditory cue signaling an NG trial, whether the animals responded with an FA or with a correct withhold . Consistent with the role of the STN in reward processing , several units modulated their activity upon reward . Surprisingly, reward-related information is processed by units separate from those potentially gating Go responses . From these data, we conclude that responses to the Go cue and to the reward are encoded by separate channels in the STN. 
Using distinct channels may uncouple reward value from response control during conditional responding. Our analysis also revealed STN responses coupled to the precue, Go, and FA pokes , and we detected unit populations within the STN that responded to the onset of immobility and movement . These immobility onsets were spread across various task periods; the majority occurred either in the precue or ITI periods of the task, and fewer than 5% occurred during presentation of the NG sound . These data indicate that the STN encodes preparatory withholding, akin to behavioral inhibition, rather than cognitive control of Go vs. NG cue-dependent behavioral discrimination . We next explored the encoding of these features in the principal component analysis (PCA) space of the population vector activity in the STN. We found distinct vector paths for subsets of these parameters, which suggests rather rigid separation of some stimulus and behavior variables. Responses to the Go sound and reward onset had similar trajectories in PCA space , with weak general response to both light and NG sound onsets. Moreover, cell-by-cell analysis revealed that these states were encoded across STN units and each unit carried multiple signals . Precue poke trajectories (and to a lesser extent FA pokes) were separated from the trajectories for immobility and Go pokes . These data indicate that the STN encodes features of impulsivity, particularly those associated with precue pokes, as a behavioral state distinct and dissociable from Go responses. The sets of units that responded to precue pokes with decreased firing rates, varied in their firing patterns for other behavioral variables, for example, non-responding, increasing, or decreasing upon Go poke or immobility onsets . Taken together, these findings suggest multiplexed encoding of impulsivity and motor features in the GNG task at the unit level. Since many of the investigated units were bound to a precue poke event, we examined whether there might be a correlation between impulsivity behavioral events in this task period and STN electrophysiological activity. As a proxy for an animal split on impulsivity level, we grouped the pooled precue periods into four categories with increasing number of behavioral impulsivity events – the precue pokes . The unit population suppressed during the precue poke event significantly decreased its average firing rate with increased incidence of behavioral events (negatively correlated with precue pokes). However, the activated population showed the converse effect (positively correlated with precue pokes). Importantly, the mean activity of the negatively correlated population was higher in periods without precue pokes compared with units activated by this behavioral event, strongly suggesting that this population acts as an inhibitory gatekeeper for this behavior. Local units are bound by time-locked oscillations that organize functional coupling to intra-STN and mesoscale brain networks. Specifically, beta-range coupling in the STN is associated with action inhibition , whereas gamma-range coupling is associated with action execution . To study the oscillatory activity during the GNG task, we compared event- (precue, Go and NG sound periods) and behavior- (immobility) related spectral powers in the task . High theta power during immobility might reflect pre-decision-making for action selection . 
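A minimal sketch of how such event- and behavior-related band power can be computed from a raw LFP trace is given below; the band limits, filter order, and Hilbert-envelope approach are illustrative assumptions, not the settings used in the study.

import numpy as np
from scipy.signal import butter, filtfilt, hilbert

BANDS = {"theta": (4, 12), "beta": (13, 30), "gamma": (30, 90)}  # assumed band edges

def band_power_around_events(lfp, fs, event_samples, win=(-2.0, 1.0)):
    # lfp: 1-D LFP trace; fs: sampling rate (Hz); event_samples: onset indices
    # (e.g., precue pokes or immobility onsets). Returns the event-averaged
    # power time course in each band over the chosen peri-event window.
    out = {}
    n0, n1 = int(win[0] * fs), int(win[1] * fs)
    for name, (lo, hi) in BANDS.items():
        b, a = butter(3, [lo, hi], btype="band", fs=fs)
        power = np.abs(hilbert(filtfilt(b, a, lfp))) ** 2
        snips = [power[s + n0:s + n1] for s in event_samples
                 if s + n0 >= 0 and s + n1 < len(power)]
        out[name] = np.mean(snips, axis=0)
    return out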
As expected from the role of the STN in action inhibition, the immobility period showed the strongest LFP power in the beta band, while the highest gamma power was bound to the Go sound cue, reflecting beta- and gamma-related behavioral inhibition and execution, respectively. To investigate these patterns in more detail, we used time-resolved power spectrograms centered around the onset of events and behaviors. This analysis revealed action-locked STN activity probably linked to reward expectation/consumption. This signal was absent from NG-cued and non-reinforced FA pokes. By contrast, immobility onset correlated with an overall increase in theta power, followed by a drop of beta and gamma activity. Unlike the ITI periods, trial onsets significantly reduced theta and beta bands, suggesting behavioral disinhibition and lower action thresholds during the precue period. This pattern was less pronounced during the NG sound, indicating a bias toward waiting impulsivity in the STN. Consequently, by analyzing the power spectra around precue pokes, we found a strong decrease in the theta, beta, and gamma bands prior to a precue poke, suggesting that neuronal decoupling of the STN from its intra-STN and mesoscale networks results in impulsive choice. Overall, this picture recapitulates the decoupling of the STN from global networks in HI animals. To assess the gatekeeping function of the STN in the GNG task, we used an optogenetic approach. Using adeno-associated viral vectors, we expressed the light-sensitive opsins channelrhodopsin-2 (ChR2) or archaerhodopsin (Arch) in the STN and implanted an optical fiber above the STN for light activation. Prior to behavioral testing, we functionally validated this approach by patch clamp recordings to measure activation and inhibition of the STN. The temporal resolution of optogenetics allowed us to investigate STN function specifically in the precue and cue phases of the task. Inhibition of the STN during the precue phase significantly increased the precue response rate, whereas activation had the opposite trend. These opposite effects strongly suggest bidirectional control of impulsivity by the STN. Under this stimulation regime, we also observed post-laser effects: the latency to respond to both Go and NG cues was prolonged by these manipulations; however, this did not strongly affect the number of responses in either the Go or NG trials. As expected, cue-bound optogenetic manipulation of STN function did not affect the precue response rate; however, it did reduce the fraction of Go responses in both the ChR2 and Arch groups, with alterations in latency to respond. We speculate that this unidirectional effect indicates that activation or inhibition of the STN impacts general task performance (as measured by the Go parameter). Consistent with this, STN lesions in animals and deep brain stimulation (DBS) in humans altered the number of cue responses. Also, studies in which STN activity was increased by blocking GABA-ergic inputs resulted in abnormal movements and decreased behavioral performance. Optogenetic activation or inhibition of STN function had no effect in NG trials on either response numbers or their latency, thus dissociating precue effects on waiting impulsivity from cue-related effects on stopping impulsivity.
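As one way to quantify the optogenetic effects on precue responding reported above, the sketch below contrasts precue response rates between laser-on and laser-off precue periods within a session. The trial dictionary keys are hypothetical, and the Mann-Whitney U test is an illustrative choice; the original analysis may have used a different (e.g., per-animal, paired) statistic.

import numpy as np
from scipy import stats

def laser_effect_on_precue_rate(trials):
    # trials: list of dicts with keys 'laser_on' (bool), 'precue_pokes' (int)
    # and 'precue_dur' (seconds). Precue response rates are compared between
    # laser-on and laser-off precue periods.
    on = [t["precue_pokes"] / t["precue_dur"] for t in trials if t["laser_on"]]
    off = [t["precue_pokes"] / t["precue_dur"] for t in trials if not t["laser_on"]]
    u, p = stats.mannwhitneyu(on, off, alternative="two-sided")
    return np.mean(on), np.mean(off), p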
These optogenetic results suggest that, at lower intensity regimes (see below), STN function mainly affects waiting impulsivity and, to a lesser extent, cue-related responses, even though we found that precue responses and FA responses were both represented in the STN (FA-related trajectories). Taken together, our findings from optogenetics and electrophysiological recordings show that the STN encodes and controls impulsive action, especially related to precue waiting impulsivity. Altered glutamatergic neurotransmission is implicated in the pathobiology of impulsivity-related mental disorders. Moreover, pharmacological modulation of both ionotropic and metabotropic glutamate receptors affects impulsivity. The metabotropic glutamate receptor 4 (mGlu4) emerged as a strong candidate, as it is widely expressed in the STN and the globus pallidus, which are both elements of the indirect pathway of movement in the basal ganglia. Presynaptic activation of mGlu4 reduces, but does not abrogate, neurotransmitter release, making fast glutamatergic neurotransmission accessible to neuromodulatory therapeutic intervention. In fact, positive allosteric modulators (PAMs) of mGlu4 show promising results in pre-clinical and clinical trials as potential therapeutic agents to reverse motor dysfunction in Parkinson's disease (PD), but can increase impulsivity in rats. We hypothesized that mGlu4 can specifically gate impulsive action via glutamatergic modulation of STN output and, in consequence, the expression of impulsive traits. To investigate the possible role of mGlu4 in impulsivity, we used a PAM specific for this receptor, 4-((E)-styryl)-pyrimidin-2-ylamine (mGlu4 PAM). We delivered the mGlu4 PAM to HI/LI animals and assayed for trait-dependent modulation of impulsivity. This PAM increased the precue response rate in the GNG task, predominantly in LI animals, without overtly affecting FA or Go responses. Thus, the effect appears to depend on baseline trait impulsivity. To investigate the possible neuromodulatory interaction of mGlu4 with trait impulsivity in the STN circuitry, we treated HI and LI animals with mGlu4 PAM and then evaluated its effects throughout the brain by rs-fMRI. As for the comparison of HI and LI animals, we used a node-centric approach to rank-order hotspots of the mGlu4 x HI/LI interaction. Again, rather than reporting p-value statistics, we reported the data as F values of the interactions and filtered for significance given the small sample size. Using this approach, we could directly map the interaction between mGlu4 modulation of brain functional connectivity and impulsive behavioral traits. As expected from the broad distribution of mGlu4 in the limbic system, mGlu4 PAM affected many brain areas, resulting from drug action on this distributed mGlu4 expression. Thus, this method should be sufficiently sensitive to monitor the interaction of mGlu4 PAM treatment and HI/LI animals, which should reveal hotspots relevant for mGlu4 modulation of trait impulsivity. This node-wise analysis of functional connectivity ranked the STN as the brain node with the highest impulsivity x treatment interaction. Rank analysis of this interaction, together with group splits for distance traveled and licks, showed that the effect on the STN is specific for impulsivity. Both the AMY and ZI ranked much lower in the interaction score for impulsivity than in the HI/LI functional connectivity split, indicating another role for mGlu4 in these structures.
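As a rough illustration of how node-wise interaction scores of this kind can be derived, the sketch below asks, for each node, whether the mGlu4 PAM-induced change in mean connectivity differs between HI and LI animals; in a 2 x 2 mixed design the resulting t statistic squared corresponds to the interaction F. This is a simplified stand-in for the repeated-measures ANOVA used in the study, and the array inputs are hypothetical.

import numpy as np
from scipy import stats

def interaction_f_per_node(fc_hi_veh, fc_hi_pam, fc_li_veh, fc_li_pam):
    # Inputs: arrays shaped (n_animals, n_nodes) holding each animal's mean
    # connectivity of every node under vehicle or mGlu4 PAM treatment.
    # The group x treatment interaction is tested on the within-animal
    # change (PAM minus vehicle) compared between HI and LI groups.
    delta_hi = fc_hi_pam - fc_hi_veh
    delta_li = fc_li_pam - fc_li_veh
    t, p = stats.ttest_ind(delta_hi, delta_li, axis=0)
    f_values = t ** 2                    # interaction F for a 2 x 2 design
    rank = np.argsort(-f_values)         # nodes ordered by interaction strength
    return f_values, p, rank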
To assess how the mGlu4 PAM network effects are reflected at the cellular level, we compared STN activity between vehicle and mGlu4 PAM treatment. The mGlu4 PAM reduced the fraction of units inhibited upon precue poke when compared to the controls, and this fraction correlated negatively with waiting impulsivity (putative behavioral gating units). We observed no such effect in units gating the onset of immobility. These data suggest that the mGlu4 PAM specifically antagonizes the STN activity that gates impulsive behavior. A drop in LFP power in the STN correlates with precue pokes. Thus, we reasoned that the mGlu4 PAM should decrease LFP power in the STN. Consistent with the elevated precue response rate upon treatment with mGlu4 PAM, LFP power in the beta and gamma bands was reduced during the precue phase when compared to the vehicle control. In the vehicle control, STN LFP power dropped prior to precue pokes, as observed above (baseline), whereas after mGlu4 PAM treatment beta and gamma power were uncoupled from precue pokes. This mGlu4 PAM-mediated uncoupling was most evident when we compared the non-normalized absolute LFP power preceding precue pokes in the vehicle control and after mGlu4 PAM treatment, and when analyzing the variance in raw LFP power around precue pokes. This showed a significant reduction in STN LFP modulation during mGlu4 PAM treatment upon impulsive action. We propose that these changes reflect intra-STN modulation of STN neural activity and locally generated LFP by mGlu4. The mGlu4 receptor is expressed mainly on pallido-subthalamic and subthalamo-nigral synapses, making it well situated to gate neuronal processing through the STN, either by acting on GP inputs or at STN outputs to the substantia nigra pars reticulata (SNr). Furthermore, mGlu4 PAMs modulate subthalamo-nigral pathways ex vivo. Thus, we hypothesized that mGlu4 PAMs might modulate STN outputs to affect impulsivity in vivo. To test this hypothesis, we performed an electrophysiological characterization of the SNr. Unlike our findings with the STN, we saw no responses in the SNr coupled to light onset; however, we found units coupled to the Go sound, to the NG sound followed by FA, to rewards, and to correct withholds, as well as units bound to behavioral events such as precue pokes, Go pokes, FA pokes, immobility, and movement onset. In contrast to neurons in the STN, the firing rate of SNr neurons remained constant when trials were split by the number of precue responses. Moreover, when compared to the population in the STN, the SNr population activity in PCA space showed less separated paths for behavior and for stimulus-centered events. As in the STN, SNr cell-by-cell analysis revealed differential encoding of precue pokes and Go sound responses, but to a lesser degree than in the STN, as was the case for event onsets. This difference between areas might reflect the functional proximity of the SNr to basal ganglia motor output, where impulsivity and motor commands may separate less than they do in the STN, which encodes traces of higher cognitive decisions. To investigate this possibility at the SNr network level, we inspected event- (precue period, Go, and NG sound periods) and behavior- (immobility) related spectral LFP in our task. SNr beta power increased during precue phases and reflects beta band synchrony between the STN and SNr, generally associated with behavioral inhibition.
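The degree to which LFP power remains coupled to precue pokes, assessed above for the STN and below for the SNr, can be summarized by the modulation of raw power around the poke. A minimal sketch of one such modulation index (the variance of the poke-averaged power time course) is given below; the window length and the choice of variance as the summary are illustrative assumptions rather than the measure used in the study.

import numpy as np

def perievent_power_modulation(power, fs, poke_samples, win=(-2.0, 2.0)):
    # power: 1-D band-limited LFP power trace; fs: sampling rate (Hz);
    # poke_samples: precue-poke onset indices. Returns the variance of the
    # poke-averaged power time course; comparing this value between vehicle
    # and mGlu4 PAM sessions quantifies the uncoupling described above.
    n0, n1 = int(win[0] * fs), int(win[1] * fs)
    snips = [power[s + n0:s + n1] for s in poke_samples
             if s + n0 >= 0 and s + n1 < len(power)]
    mean_course = np.mean(snips, axis=0)
    return float(np.var(mean_course))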
Consistent with the proximity of the SNr to basal ganglia motor output, SNr gamma power transiently increased preceding execution of an action . The increased beta power seen upon the NG cue reflects SNr behavioral inhibition recruited by the withhold signals in the task . The strong motor binding of SNr oscillatory activity (as opposed to STN activity, ) suggests SNr modulation by sources other than the STN . Since the mGlu4 receptor is known to modulate synaptic transmission between the STN and SNr, we reasoned that the effects of mGlu4 PAM in the SNr should be particularly pronounced in the SNr. Indeed, mGlu4 PAM treatment modulated SNr recruitment to precue pokes ( ,C), but not its binding to basic motor output ( ,C). At the level of LFPs, we observed only a minor reduction in overall LFP power upon mGlu4 PAM treatment . However, this was paralleled with a significant uncoupling of theta and beta band coupling to precue pokes in the SNr , and in the variance of raw LFP power around precue pokes . The LFP power spectra and their modulation by mGlu4 PAM ( , Ei) explain the dominant interaction of STN with impulsive choice, compared to SNr and its interaction with mGlu4 modulation . Together, these data suggest that mGlu4 modulates the control of impulsive action by the STN and SNr, without altering activity bound to motor output. We found that the STN is a hotspot for the interaction of mGlu4 PAM activity with trait impulsivity, strongly suggesting that mGlu4 expression in the STN modulates this behavioral trait directly. We examined animals with the HI or LI phenotype (similar group separation to our rs-fMRI experiment) in our GNG task and performed high-resolution in situ hybridization to locate areas of mGlu4 expression in samples from both groups . Quantification of the in situ hybridization signal revealed a statistically significant difference in Grm4 mRNA expression in the STNs of HI and LI animals, but not in the other brain regions we analyzed, including the ZI, thalamus, and AMY . As expected, the predominant signal in the perinuclear portion and in VGlut + neurons (detected by a probe mix for Slc17a6 and Slc17a7 ; ) covaried most strongly with the HI and LI phenotypes. This suggests that locally synthesized mGlu4 is closely linked to trait impulsivity (as opposed to STN afferents also expressing mGlu4). To study the function of mGlu4 in trait impulsivity, we used adeno-associated viral vectors expressing short hairpin (sh)RNAs against the receptor (mirE-mGlu4) . First, we tested the efficacy of the vectors in Neuro-2a cell culture, a cell type known to express mGlu4 endogenously . The shRNAs effectively lowered mGlu4 protein levels, as assayed by Western blot . We injected the validated vectors into the STN of HI and LI animals . HI animals injected with mirE-mGlu4 showed a persistent decrease in precue response rate when compared with the control group post-surgery ; this effect was absent in LI animals . As with pharmacological inhibition, we saw no changes in FA or Go responses . Taken together, the data from our optogenetic, pharmacological, mGlu4 expression, and silencing experiments show complementary phenotypes in re-occurring patterns in HI and LI animals. These are consistent with mGlu4 in the STN modulating (waiting) impulsivity traits. To compare the consequences of modulation of STN activity on impulsive behavior with those on general motor function , we performed a series of open-field experiments using various means to manipulate STN function, as above. 
As expected, high-power optogenetic activation of the STN suppressed motor output . However, low-power optogenetic activation of the STN, as used in the GNG task and akin to pharmacologically blocking GABA transmission to the STN , did not alter motor behavior. Optogenetic inhibition, which might be expected to be less effective than complete STN lesions , modulated impulsivity in the GNG task independent of general motor effects . Furthermore, both pharmacological and shRNA-mediated inhibition of mGlu4 activity affected impulsivity without affecting general motor behavior. Moreover, in a separate, untreated cohort of animals, we found no significant correlation between distance traveled in OF and either precue response rate or percent of trials with FA . This finding is consistent with reports that general motor activity does not clearly correlate with impulsivity in different mouse strains . Thus, we conclude that weak manipulations that specifically target impulsivity but not motor functions lead to the observed phenotype in the GNG task. We speculate that this differential behavioral effect reflects the separation of motor and impulsivity-related stimuli and behaviors in different regimes of STN activity . Our study exploits the inherent variance in animal behavior to deconstruct neuronal circuit mechanisms and genetic factors that underlie trait impulsivity. By combining small animal fMRI with circuit neuroscience methods , we were able to identify brain hotspots accounting for natural variance in trait impulsivity in an unbiased way. The STN emerges as the top ranked candidate, with higher functional connectivity in LI animals reflecting behavioral gating. Conversely, the lower level of STN functional connectivity in HI, we suggest, is a sign of less effective interactions or rapid transfer of information between and across the STN and the rest of the brain network. We validated our rs-fMRI screen with subsequent electrophysiological profiling and with optogenetic, pharmacological, and genetic manipulations. These mechanistic analyses reveal that the STN circuitry dissociates basic motor functions from impulsive responses. Additionally, we find that both STN activity and mGlu4 expression depend on the intrinsic trait impulsivity of the subject. Our findings suggest that STN, a known node in basal ganglia motor processing, is also crucial for the control of trait impulsivity via mGlu4. The identification of STN controlling trait impulsivity adds a novel dimension to previous studies on impulsivity and STN function. STN activity was previously shown to respond to stop cue presentations, contributing to behavioral inhibition . Additionally, it was functionally linked to Go and NG trial discrimination, and contributes to reactive control and behavioral choice . Moreover, STN lesions and pharmacological studies link its function to impulsive action , whereas human fMRI and DBS studies confirmed its key role in motor control and implied role in ICDs . Here, we extended these findings on the role of the STN by identifying it as one of the main hotspots for trait impulsivity in a brain-wide screen. Consistent with this role, human fMRI and electrophysiology studies showed that the magnitude of STN activation during stop trials correlated inversely with both trait impulsivity and reaction time in GNG tasks . We note, however, that these studies of human subjects were performed mainly on patients with PD . 
Here, we demonstrate a specific role of the STN in controlling trait impulsivity and find that impulsive action is an STN neuronal state that is separable from motor activity. Our data reveal that the STN is engaged in one of the two main impulsivity parameters, the precue response rate over NG cue withholding ( ,Bi,iii,C,D) in a free-moving, cue-initiated GNG task. We and others showed that the precue response rate and FAs are highly correlated, as both subserve behavioral inhibition and serve as reliable measures of impulsivity in GNG tasks . Our data showed a significant correlation between precue response rate and the percentage of trials with FA , but no correlation between the percentage of correct Go responses and either parameter ( ,H). This suggests independent processing of each task response. We argue that the precue responses may reflect anticipatory, premature behavioral execution (in the domain of waiting impulsivity); this responding before the reward is actually due is similar to the premature responding seen in five-choice serial-reaction time tasks (5-CSRT) . Premature responses in 5-CSRT and 4-CSRT tasks were previously interpreted as waiting impulsivity, a facet of impulsivity distinct from motor impulsivity , in both human and rodent studies . Moreover, this aspect of impulsivity was associated with the development of drug addiction, as well as an expression of attention deficit in ADHD . FAs indicate learning the instrumental omission contingency and thus are associated with inhibition of impulsive action (stopping impulsivity) . Based on our results, we conclude that the STN encodes the gating of this behavior through a subset of neurons inversely correlated to precue responses . This was further confirmed by our optogenetic inhibition studies, which showed a significant change in the precue responses when STN activity was manipulated, with no effect on cue-related impulsivity . Finally, our findings are consistent with previous human studies targeting the STN in relation to waiting impulsivity and the development of addictive behavior . Previous pharmacological studies using systemic administration of an mGlu4 PAM found a link between impulsivity and mGlu4 activity . Here, we confirm and extend this finding by linking the action of the mGlu4 PAM to baseline impulsivity levels, showing a state-dependent effect. Moreover, we provide insights into the neuroanatomical substrates and mechanism of action underlying the drug effect. Despite the widespread expression of mGlu4 in the brain, we demonstrate by rs-fMRI analysis that the STN is the crucial node in the interaction between mGlu4 PAM and trait impulsivity. In this context, our electrophysiological and rs-fMRI data sample mGlu4 modulation upstream and downstream of the STN. To assess postsynaptic effects within those networks, we analyzed the activity of one of the main STN outputs, the SNr. These recordings revealed encoding of several GNG features within that nucleus , along with strong binding to immobility and movement onset , consistent with SNr motor functions. Since dissociating impulsivity and motor features deteriorates from the STN to the SNr, this result may indicate their proximity to cognitive function and basal ganglia output, respectively . These findings are mirrored by rs-fMRI results that are less distinct between HI and LI in SNr compared to STN . Notably, mGlu4 directly attenuates temporal dynamics (variance) in STN/SNr LFP power. 
We interpret this as a signature of uncoupling of the STN/SNr from interconnected functional mesoscale networks, thereby affecting behavioral inhibition during precue phases. This puts into context studies that link STN/SNr connectivity to motor behavior in humans, with a pronounced bias toward waiting impulsivity. That the mGlu4 PAM was efficacious following systemic administration further strengthens the idea that mGlu4 might serve as a therapeutic target for the treatment of maladaptive impulsivity in psychiatric patients. Together, we provide mechanistic insights into the dynamics of how mGlu4 PAMs modulate brain circuitry at the level of the STN. The gene encoding human mGlu4, GRM4, has not previously been implicated in impulsivity phenotypes, either by single nucleotide polymorphisms (SNPs) associated with impulsive disorders or by differences in expression levels in humans. Meta-analyses and genome-wide association studies (GWAS), however, have found associations between GRM4 and several psychiatric disorders that share the impulsivity domain, such as PD, bipolar disorder, and ADHD. Some of these studies point to SNPs in miRNA binding sites within GRM4, or in microRNAs (miRs) that target mGlu4, either of which could affect gene expression. These data, together with human fMRI studies, suggest that mGlu4 modulation of STN function may contribute to impulsivity in humans. Here, we demonstrate that silencing of Grm4 specifically in the STN decreases the impulsive behavior of HI animals. While we cannot exclude the contributions of upstream mGlu4 PAM target regions, our results indicate a presynaptic site of action downstream of the STN driving the impulsivity phenotype. In this regard, we observed significantly lower mGlu4 levels in rodents selected for low trait impulsivity compared to HI rodents. As mGlu4 modulation of STN efferent activity has been demonstrated in vitro, we speculate that high mGlu4 expression decreases STN signaling and fine-tunes the inhibition of impulsivity by the STN. In summary, by combining rs-fMRI, electrophysiology, optogenetics, pharmacology, and genetics, we have discovered a previously unknown role of mGlu4 function in the STN as a crucial modulator of trait impulsivity. Synaptic modulation by metabotropic glutamate receptors in the STN dissociates and fine-tunes impulsive traits independently of general motor function. We propose that mGlu4-driven neuromodulation of STN activity regulates impulsive states and biases toward impulsive action, while remaining subthreshold for affecting the representation and execution of gross motor gating. In this way, the brain independently and selectively modulates a cognitive behavioral trait within a basic motor circuit by using metabotropic glutamate signaling. This demonstrates that multiplexing of different functions (here, impulse control and general motor gating) within the same network via neuromodulation allows the brain to control and adapt multiple behaviors with limited circuitry. From a translational perspective, the STN, mGlu4, and metabotropic glutamate signaling may emerge as potential targets for impulsivity treatments. This opens opportunities for therapeutic interventions that selectively target pathological impulsivity without affecting general motor performance.

Limitations of the study

Trait differences within an isogenic strain allow us to discover environmental factors and/or epigenetic changes that play a role in behavior.
We focused in this study on mGlu4 because human GWAS indirectly linked the GRM4 gene with impulsivity and provided a rationale for our investigations. Exploring inter-individual variation in impulsive behavior in an isogenic animal model, however, cannot address the entire spectrum of genomic variations that might influence impulsive traits, particularly in the human context. We recognize the limitations of the study due to the use of male subjects only. This is not unique to this impulsivity study. We chose to study males not only because surveys by the World Health Organization show a significant gender bias toward males in the prevalence of ADHD but also to allow us to compare our findings to those of previous studies cited in this paper. Moreover, we expected that genetic homogeneity in our test subjects would improve the statistical power of the study, given the constraints in sample sizes. That said, our study paves the way for important future studies to explore if and how the mechanism identified here differs in females. Finally, additional data on the effects of mGlu4 PAM delivered directly to the brain might further support the mechanism we propose. At this point, however, the properties of the compound do not allow direct brain parenchymal infusion for that purpose.

Subjects

C57Bl6/J male mice purchased from Jackson Laboratory were used in all experiments. Animals were housed in groups of a maximum of five and kept under a 12:12 hr light:dark cycle with food and water ad libitum. Experiments were conducted during the light period. Water deprivation was practiced over the period of GNG behavioral testing, with the exception of the post-surgery recovery period. All animal experiments were performed in accordance with institutional guidelines and were approved by the respective Austrian (BGBl nr 501/1988, idF BGBl I no 162/2005) and European (Directive 86/609/EEC of November 24, 1986, European Community) authorities and covered by the license GZ2452882016/6. Overall animal history is shown in .
Ex vivo electrophysiology

Male wild-type mice (2–3 months of age) were deeply anesthetized with isoflurane, decapitated, and their brains quickly chilled in sucrose-based dissection buffer, bubbled with 95% O2/5% CO2 and containing the following (in mM): 220 sucrose, 26 NaHCO3, 2.4 KCl, 10 MgSO4, 0.5 CaCl2, 3 sodium pyruvate, 5 sodium ascorbate, and 10 glucose. Transverse coronal brain slices (300 µm) were cut in dissection buffer using a Vibratome (Leica, VT1000S) and immediately incubated for a 15 min recovery phase in oxygenated artificial cerebrospinal fluid (aCSF) containing (in mM): 126 NaCl, 2.5 KCl, 1.25 NaH2PO4, 26 NaHCO3, 2.5 CaCl2, 2.5 MgCl2, and 25 glucose, in 95% O2/5% CO2 at 32°C. This was followed by a slice resting phase in oxygenated aCSF for at least 45 min at room temperature (RT). Individual brain slices containing the STN were placed on the stage of an upright, infrared-differential interference contrast microscope (Olympus BX50WI) mounted on an X-Y table (Olympus) and visualized with a 40× water immersion objective by an infrared-sensitive digital camera (Hamamatsu, ORCA-03). Slices were fully submerged and continuously perfused at a rate of 1–2 ml/min with oxygenated aCSF. Neurons infected with either AAV::Arch or AAV::ChR2 were identified by the presence of YFP fluorescence. Patch pipettes were pulled on a Flaming/Brown micropipette puller (Sutter, P-97) from borosilicate glass (1.5 mm outer and 0.86 mm inner diameter, Sutter) to final resistances ranging from 3 to 5 MΩ. The internal solution for voltage-clamp recordings of responses to optogenetic stimulation contained (in mM): potassium gluconate, 135; KCl, 5; HEPES, 10; MgCl2, 2; EGTA, 0.2; MgATP, 4; Na3GTP, 0.4; K3-phosphocreatine, 10; biocytin, 0.1; pH 7.2 (with KOH). Cells were held at –70 mV. Cells were allowed to reestablish constant activity during a 5 min waiting period after breaking the seal. For AAV::ChR2, optogenetic pulses (473 nm) of increasing frequencies (5, 10, 20, 40, 80 Hz) were applied to test for optogenetic response fidelity. For AAV::Arch, cells were subjected to an increasing ramp of 1 s depolarizing current pulses, each one accompanied, after 300 ms, by 400 ms of 20 Hz optogenetic pulses (563 nm).

Stereotaxic surgery

Surgeries were performed using a Model 1900 Stereotactic Alignment Instrument (David Kopf Instruments) and a Model 1911 stereotactic drill (David Kopf Instruments). For injections, a Nanoliter 2000 injector, driven by a Micro4 MicroSyringe Pump Controller (World Precision Instruments), was used. Needles for virus injection were pulled from 3.5-inch glass capillaries (World Precision Instruments) on a micropipette puller (Model P-97, Sutter Instruments). The surgical protocol was adapted from published procedures: mice were deeply anesthetized in the stereotactic frame with isoflurane (1.7%, IsoFlo, Abbott Laboratories) and anesthesia was verified by testing deep plantar reflexes. Gentamicin ointment (Refobacin 3 mg/g, Merck) was used to protect the animals' eyes, and their body temperature was kept constant at 36°C using a heating pad. For optogenetic manipulation, one of the viral constructs listed in the Key resources table was bilaterally injected directly into the STN at –1.85 mm AP/±1.60 mm ML/–4.70 mm DV from bregma at a speed of 20 nl/min, followed by a 5 min waiting period during which the needle was kept in place in order to avoid leakage. The final injected volume was 100 nl for the AAV::GFP and AAV::ChR2 groups, and 200 nl for the rest of the experimental groups.
After the injection optic fibers (MFC_200/245-.053_5.0 mm_ZF1.25(G)_FLT from Doric Lenses) were implanted at –2.18 mm AP/±1.60 mm ML/–4.55 mm DV from bregma. For electrophysiological assessment, the animals were trained in GNG task till achieving performance and silicone electrodes (A1 × 16 poly2 50 × 375; Neuronexus) were implanted at STN (–1.85 mm AP/±1.60 mm ML/–4.70 mm DV from bregma) or SNr (–3.14 mm AP/±1.50 mm ML/–4.70 mm DV from bregma). Ground screws were mounted above the contralateral prefrontal cortex and cerebellum. All implants were fixed to the skull with dental cement (SuperBond C&B kit, Prestige Dental Products). Animals were given enrofloxacin (100 mg/ml, Baytril, Bayer Austria) and carprofen (Rimadyl, 50 mg/ml; Pfizer Austria) via drinking water for at least 7 days and were granted a resting period of at least 14 days before behavioral retraining commenced. Viral knockdown of mGlu4 To suppress mGlu4 expression in the STN, we constructed an AAV-based vector expressing GFP and miRNA-adapted shRNAs in the optimized miR-E backbone (under control of the SFFV promoter; AAV-SFFV-GFP-miR; ASGE), as described elsewhere . Four independent shRNAs targeting mGlu4 (guide sequences: 5’-TAGTA) were designed based on optimized design rules (see Key resources table) and cloned into miR-E and the mix was used to make viral preparation for surgical injection. To test the knockdown efficiency, mouse neuroblastoma cell line (Neuro-2a, ATCC CCL-131) were transfected with the constructs mix or control plasmid using Lipofectamine 3000 according to manufacturer’s instructions. Cells were harvested 72 hr after transfection, cell pellet was resuspended in extraction buffer consisting of 20 mM Tris–HCl (pH 7.5), 100 mM NaCl, 5 mM MgCl, 2 mM NaF, 10% glycerol, 1% NP40, 0.5 mM DTT, supplemented with protease inhibitor cocktail (Complete EDTA‐free, Roche) and lysed on ice for 5 min. Input lysates and immunoprecipitates were resuspended in SDS sample buffer and heated to 95°C for 2 min. Total protein amount was assessed using Pierce BCA Protein Assay Kit according to manufacturer’s instructions (Thermo Fisher Scientific). Pre-cast NuPAGE Novex 4–12% Bis-Tris midi gels (Invitrogen, XP04122BOX) were run in NuPAGE MOPS SDS running buffer (Thermo Fisher Scientific, NO0001). The protein samples were transferred onto a cellulose membrane using semi-wet transfer. The membranes were blotted with rabbit polyclonal antibody against mGlu4 (1:500 ab53088, Abcam) and mouse monoclonal anti-β-actin (1:5000 A5441; Sigma). The mGlu4 expression was assessed with ImageJ and normalized to β-actin signal for each sample. GNG task and performance criteria The protocol was based on published material . All animals were water deprived over-night to increase their motivation for a milk reward (10% condensed milk in water). The experiments were conducted in, in-house customized, Coulbourn behavioral testing boxes. For the experimental setup, a custom-built port with rectangular entrance in which the animal can insert the entire head was used. This port was equipped with a liquid delivery system and a blue/yellow (balanced) LED behind the port which can illuminate the port. Furthermore, it possessed two IR beams, first one to detect nose pokes (visits to port) and an additional one to detect licks at the liquid delivery system tube. The cage was cleaned with 70% ethanol before each mouse. Prior to GNG training, animals were habituated for one session in which the reward was dispensed in the port at variable intervals. 
Each reward occurrence was associated with a click sound that was then used throughout the entire behavioral experiment. The habituation session contained 60 reward deliveries without cue presentation, with the house light on, separated by light-off inter-trial intervals (ITIs) of 10 s. Each trial of the GNG task consisted of four periods. The precue period of varying duration (9–24 s) was signaled by the house light turning on. Responses during this period were recorded but not reinforced. The variable timing was used to avoid predictive behavior toward the presentation of the cue. Any port visit during the last 3 s of the precue period terminated the trial (jump to ITI) in order to prevent false positive/false negative response counts upon cue presentation. Cue presentation followed the successful passing of the precue period and consisted of an auditory cue. The sounds chosen were a pulsed white noise for the Go trials and a 3 kHz pure tone for the NG trials, both set at 75 dB intensity. The first visit to the response port during the cue presentation ended the cue and led to: (1) the reward delivery period (3 s), in case of a correct Go response; or (2) a jump to the ITI without reward delivery if a poke was detected during NG cue presentation (named false alarm, FA). Conversely, a successful NG trial was achieved by withholding poke responses in the port during the entire NG cue presentation period and resulted in reward delivery after cue end. For both types of correct response, the reward was coupled with a 'click' sound and an LED light in the port and consisted of 20 μl of a 10% milk solution. Licks were recorded for the duration of the entire trial. Each trial ended with a 10 s ITI during which the house light was off. Responses during this period were recorded but had no consequences. Following habituation, animals started the first training phase of the GNG task. Here, only Go trials were used, where an animal had to deliver a port response during the 30 s Go cue presentation. Training occurred daily and consisted of 60 Go trials in each session or 40 min, whichever came first. After the animals reached a correct response rate of at least 60% and their performance was stable over the course of three consecutive sessions, the next training phase started. In the second training phase, animals were introduced to the NG cue and cue presentation time was set to 10 s for both Go and NG trial types. Each session consisted of a total of 60 trials (30× Go and 30× NG, random order) or 40 min, whichever came first. Animals were trained until they reached a correct response rate of at least 80% paired with an FA rate of at most 45%, and their performance was stable over the course of three consecutive sessions. Animals that met these criteria then underwent stereotaxic surgery. After the post-operative rest period, the mice were retrained until they reached the performance criteria for the optogenetic and pharmacological experiments. For the shRNA experiment, animals were tested every second day for a total of seven sessions. Additionally, animals that, on a non-treatment day, made fewer than 80% correct Go responses and/or more than 60% FAs during NG trials were excluded. Passing these criteria, animals were assigned to HI/LI groups based on 25th/75th percentile or median splits. This yielded individually consistent categorization of HI/LI animals, which was comparable across experiments.
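For concreteness, a minimal sketch of the session-level performance criteria and the percentile-based HI/LI assignment described above is given below; the dictionary keys and the use of a single compound impulsivity score per animal are hypothetical simplifications, not the original analysis code.

import numpy as np

def passes_training_criteria(sessions, go_min=0.80, fa_max=0.45, n_stable=3):
    # sessions: per-session dicts with 'correct_go' and 'fa_rate' (fractions),
    # most recent session last; performance must be stable over n_stable sessions.
    if len(sessions) < n_stable:
        return False
    return all(s["correct_go"] >= go_min and s["fa_rate"] <= fa_max
               for s in sessions[-n_stable:])

def assign_hi_li(impulsivity_scores, lo_q=0.25, hi_q=0.75):
    # impulsivity_scores: one compound impulsivity value per animal; animals
    # above the 75th percentile are labeled HI and those below the 25th LI.
    scores = np.asarray(impulsivity_scores, dtype=float)
    lo, hi = np.quantile(scores, [lo_q, hi_q])
    return ["HI" if s >= hi else "LI" if s <= lo else "intermediate"
            for s in scores]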
Behavioral data was processed using MATLAB programs (R2015b, MathWorks, Natick, MA), videos were analyzed in Ethovision XT 8 (Noldus Information Technology, Wageningen, The Netherlands), both data types were then merged and processed using custom-made Python scripts (Python 3.3) with the final analysis being done in GraphPad Prism (Version 7). Open-field test Before each experiment, mice were allowed to habituate to the experimental room for at least 30 min prior to any testing. The optogenetic manipulation was done with three 2 min laser off periods alternated by 2 min laser on periods in between (for laser set up, see Optogenetic manipulation). In case of mGlu4 PAM evaluation, the animals were first administered the drug or vehicle in a latin square design (see Pharmacology) and after a 30 min waiting period were placed in the arena and allowed to explore for 30 min. In both cases a 50 cm (width) × 50 cm (length) × 29.5 cm (height) arena was used, and video tracked using AnyMaze software (Stoelting). In the software, a ‘center’ zone was defined as a central square 25 cm × 25 cm in size, the rest being the ‘border zone’ . The open-field evaluation of shRNA knockdown for mGlu4 was done at the preclinical phenotyping facility of the Vienna Biocenter Core Facilities GmbH (VBCF). The animals were transferred to the facility 1 week prior to experiments and housed at a 14 hr light–10 hr dark cycle in IVC racks with access to food and water ad libitum. After placing in the open-field arena (50 cm (width) × 50 cm (length) × 29.5 cm (height)), mice were allowed to explore for 30 min and were video tracked using TSE VideoMot 3D Version 7.01 software ( https://www.tse-systems.com ). In the software, a ‘center’ zone was defined as a central square 25 cm × 25 cm in size, the rest being the ‘border zone’. In all cases light conditions were about 300 lux in the center zone. After each trial, the apparatus was cleaned with water and 70% ethanol. Open-field experiments were performed in the morning (10:00 am–01:00 pm). The time spent in each zone, distance traveled, and number of center visits were recorded as readout parameters. mGlu4 PAM pharmacology 4-(( E )-styryl)-pyrimidin-2-ylamine (mGlu4 PAM, Cmp 11) was synthesized at Boehringer Ingelheim, Germany. The compound was dissolved in one volume of 0.1% Tween-80 (v/v) and nine volumes of 0.5% Natrosol and administered orally at 10 ml/kg, for a final dose of 80 mg/kg, 30 min before testing. The dose was chosen based on previous studies . In vivo electrophysiology After the surgery and recovery period, animals were handled and habituated to the recording room for several days prior experimental recordings. Electrodes were connected via an Omnetics connector to a 16-channel unity-gain headstage (Plexon) and the animal was left in the homecage for 10 min thereafter. The headstage was connected to a pre-amplifier where the signal was band-pass filtered (3 Hz – 8 kHz) and amplified. Neural activity was digitized at 40 kHz and highpass-filtered for spikes (800 Hz) and LFPs (3–200 Hz) for offline analysis. All recording sessions per mouse were merged. In general, three no treatment sessions, two vehicle, and two mGlu4 PAM sessions were concatenated together and split accordingly to type after unit sorting. 
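For orientation, a sketch of the offline separation into spike-band and LFP-band signals described above is shown below; the Butterworth filter order is an assumption, and the acquisition system's own filter implementation may differ.

import numpy as np
from scipy.signal import butter, sosfiltfilt

def split_spikes_and_lfp(raw, fs=40000.0):
    # raw: 1-D wideband trace sampled at fs (Hz). Spikes are taken as the
    # high-pass signal above 800 Hz and the LFP as the 3-200 Hz band,
    # matching the cutoffs given above.
    sos_hi = butter(3, 800.0, btype="high", fs=fs, output="sos")
    spike_band = sosfiltfilt(sos_hi, raw)
    sos_lf = butter(3, [3.0, 200.0], btype="band", fs=fs, output="sos")
    lfp_band = sosfiltfilt(sos_lf, raw)
    return spike_band, lfp_band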
Single units were sorted manually with Offline Sorter v4 (OFS, Plexon) in 3D principal component feature space on unsorted waveforms and declared a single unit if the spike cluster was separable from noise and other clusters and no refractory period infringements were present. To avoid multi-sampling of single units, cross-correlograms of units from adjacent channels were inspected for co-firing and the respective units removed from analysis. Simultaneously with neuronal data acquisition, animal behavior was recorded using both the sensor state (as described in the GNG task section) and video, with a camera located on top of the cage, using CinePlex Studio. Video analysis and synchronization with neuronal recordings were then performed using CinePlex Editor. For the extraction of immobility episodes, the motion measure was averaged with a 2 s moving-average window, with the minimum duration of an immobility state set at 1 s (threshold 80). Episodes less than 0.5 s apart were merged. Finally, Neuroexplorer 5 was used to reconstruct the behavioral paradigm and treatment sessions. Unit activity is reported either as frequency (Hz) or normalized activity (Z-score transformation). Units with fewer than 20 spikes across the entire session were removed from further analysis. For event-related analysis (i.e., poke onset), the activity of each unit was transformed to Z-scores using the mean and SD of the session-wide firing rate (250 ms bin windows) and smoothed with a 1.5 s Gaussian moving window. A unit was considered to be event-related if its activity in any of the post-onset bins (0 s to +1 s) was significantly different from mean baseline (–2 s to –1 s) activity across all trials using a one-sample t-test. To show significant changes of population firing from the session-wide mean, a two-sided one-sample t-test against 0 for each time bin was used, further corrected for multiple comparisons using cluster-based permutation testing on contiguously significant bins with 5000 iterations. To test whether the number of units responsive to a given event was larger than expected by chance, a null distribution of responsive-unit counts was built by randomly shifting the event onsets over 5000 iterations and repeating the event-related analysis described above. The observed number of units was then compared to the shuffled distribution and was considered significant if it lay within the top 10% of that distribution. Low-dimensional representations for visualizing population dynamics over time were constructed using PCA on trial-averaged, unit-based peri-event time courses for the entire population, using the selected features shown in the corresponding graphs and a time window from –5 s to +5 s. Three mice were implanted for STN recordings. Each animal was recorded for two to four independent non-treatment sessions, resulting in a total of 74 units (average eight units per session), two sessions under vehicle, resulting in a total of 27 units (average five units per session), and three sessions under mGlu4 PAM, resulting in a total of 34 units (average of four units per session). Session-averaged precue response rates for each animal used for the STN electrophysiological analysis are presented in . Two animals were implanted for SNr recordings.
Each animal was recorded for four independent non-treatment sessions resulting in a total of 41 units (average five units per session), one session under vehicle resulting in a total of 11 units (average five units per session) and one session under mGlu4 PAM resulting in a total of seven units (average of three units per session). Optogenetic manipulation Animals injected with optogenetic AAV for later neuronal modulation during behavior underwent habituation for attaching a fiber-optic patch cord (Doric lenses) onto the implanted optical fibers. For ChR2 activation, laser trains of blue light (473 nm) consisting of 20 ms pulses with a frequency of 20 Hz (if not noted otherwise) were delivered at an intensity of 1–1.5 mW at the fiber tip, unless stated otherwise. For Arch-mediated silencing, laser trains of constant yellow light (568 nm) were delivered at an intensity of 8–10 mW. Intensity of all laser stimulations was measured before every experiment at the tip of the optic fiber via Power Meter (Thorlabs, PM100D). Laser stimulation was controlled by MATLAB scripts during GNG experiments and by Arduino boards running customized scripts executed by Any-maze software (Stoelting) during open-field test. mGlu4 PAM pharmacology 4-(( E )-styryl)-pyrimidin-2-ylamine (mGlu4 PAM, Cmp 11) was synthesized at Boehringer Ingelheim, Germany. The compound was dissolved in one volume of 0.1% Tween-80 (v/v) and nine volumes of 0.5% Natrosol and administered orally at 10 ml/kg, for a final dose of 80 mg/kg, 30 min before testing. The dose was chosen based on previous studies . Resting-state functional magnetic resonance imaging Animals were administered vehicle or 80 mg/kg mGlu4 PAM (see Pharmacology) and were left undisturbed for 30 min prior any MRI measurements. MRI was performed on a 15.2T Bruker system (Bruker BioSpec, Ettlingen Germany) with a four-channel phase array coil for mouse heads (Bruker, Biospec). Prior to imaging all mice were anesthetized with 4% isoflurane, while care was taken to adjust the isoflurane levels immediately so that respiration did not go below 100 beats per minute (bpm) at any time. During imaging, respiration was kept between 110 and 140 bpm. For the rs-fMRI study, single shot echo planar imaging sequence with spin echo readout was used (TR = 2000 ms, TE = 19.7 ms, FOV = 16 × 16 mm 2 , voxel size = 250 × 250 µm 2 , 30 slices 0.5 mm thick, 1 average, 240 repetitions, 8 min total imaging time). Following resting-state scan, a high-resolution T1-weighted anatomical scan was acquired using gradient echo sequence (TR = 500 ms, TE = 3 ms, FOV = 16 × 16 mm 2 , voxel size = 125 × 125 µm 2 , 30 slices 0.5 mm thick, 4 averages). rs-fMRI data were first bias-field corrected using N4ITK algorithm . Pre-processing was done using the Data Processing Assistant for Resting-state fMRI Advanced Edition (DPARSF-A) toolbox, which is part of the Data Processing and Analysis of Brain Imaging (DPABI) toolbox Version 3.1 ( http://rfmri.org/dpabi , ). The first 10 volumes were removed from each resting-state dataset. Data were processed in series of steps that included slice-timing correction, realignment, co-registration, normalization, and segmentation using in-house created mouse masks for cerebrospinal fluid (CSF), white matter (WM), and gray matter (GM). Nuisance covariates related to the motion were regressed out using the Friston 24-parameter model . 
In addition, WM and CSF mean time-series were used as nuisance regressors in the general linear model to reduce the influence of physiological noise . Global signal regression was used . Data were smoothed spatially with a 2.4 pixel full-width half-maximum Gaussian kernel. All data were co-registered to the in-house generated mouse atlas (with 102 distinct brain regions). Data were corrected for multiple comparisons using Gaussian random field theory multiple comparison correction (voxel-level p-value = 0.05, cluster-level p-value = 0.05). Functional connectivity (FC) patterns were compared between each group, by extracting the mean time-series BOLD signal of each of 102 brain regions. In order to investigate the general effect of impulsivity across all annotated brain regions, mean FC correlations were calculated for each animal under vehicle treatment, and the mean correlation matrices for each impulsivity group were subtracted from each other. Next, for each brain node, a one-sample t-test was performed against 0 if no difference on impulsivity levels, using Bonferroni for multiple correction. To assess the interaction effect between mGlu4 PAM treatment and impulsivity levels, FC correlation matrices were calculated for each animal under vehicle or compound treatment as before, followed, by two-way repeated-measures ANOVA on each of the respective brain node, using Bonferroni for multiple correction. Histological analysis To verify virus expression (see Key resources table for viral constructs) and correct locations of optical fiber tips and cannulae, animals were sacrificed using a mixture of 10 mg/ml ketamine (OGRIS Pharma) and 1 mg/ml medetomidine hydrochloride (Domitor, ORION Pharma) in 1× PBS and transcardially perfused with 40 ml of cold 1× PBS/heparine followed by 40 ml of 4% (PFA) in 1× PBS. Brains were immediately removed, post-fixed in 4% PFA at 4°C overnight, transferred to 30% sucrose solution for 24 hr and subsequently frozen in Tissue Tech O.C.T. on dry ice and stored at –80°C until sectioning. Coronal cryosections were cut at 20 µm thickness. Selected sections were counterstained with DAPI and mounted in Fluorescence Mounting Medium (Dako, S3023). Whole slides were then scanned using an automated widefield microscope (Pannoramic 250 Flash, 3D HISTECH Ltd.). Once images were acquired, regions of interest (ROIs) were marked by hand within Pannoramic Viewer (3D HISTECH Ltd). Expression of viral constructs and location of optical fiber tips/cannulae were then assessed for correct targeting . Viral expression was quantified using a custom ImageJ script, using an overlay of the appropriate sections from all animals within the experimental group and averaging the signal. In situ hybridization The mGlu4 expression analysis was done on cryosections with tissue prepared as described for the histological analysis extracted from GNG trained animals. The multiplexed in situ hybridization staining was done using the RNAscope system (RNAscope Multiplex Fluorescent v2 kit Cat no. 323100, Advanced Cell Diagnostics) according to manufacturer’s protocol. The tissue was co-hybridized with proprietary probes against Grm4 (Cat no. 480991), probe mix against Slc17a6+ Slc17 a7 (Cat no. 416631 and 319171) followed by differential fluorescence tagging (TSA Cy 3, 5, TMR Fluorescein Evaluation kit Cat no. NEL 760001KT, Perkin Elmer). Slides were imaged as described in the Histological analysis section. 
Images were processed using ImageJ, quantifying the total number of mGlu4 foci, and normalizing each ROI to the total number of detected nuclei using DAPI staining. For each brain region, ROIs at least two sections were selected for left and right hemispheres, averaged, and treated as independent samples. mGlu4 expression quantification was performed on these ROIs by automated analyses based on custom ImageJ scripts. The signal for each ROI was normalized to the average signal in the LI group for regional analyses. For cellular analyses, perisomatic cellular signals and cell type (VGlut) were identified using custom scripts. Cells were defined as VGlut + if their perisomatic signal was above the ROI section average for a conservative classification of VGlut + glutamatergic projection neurons. Statistical data analysis Sample sizes are in line with estimates derived from previous experiments, using G*Power Version 3.1.9.6. For behavioral experiments, target sample size was in the range of 6–10 animals (effect size 0.45, ). Animals were assigned randomly to all experimental cohorts. The behavioral experimenter was blind to the treatment of respective groups, wherever possible. All behavioral and data analyses were carried out blinded and using automated computational pipelines, wherever applicable. Establishing of the behavioral assay, neural recordings, and circuit manipulation were performed in independent experiments, with separate animal cohorts, wherever applicable. Basic behavior was replicated across experiments for control groups. Optogenetic manipulations were replicated on separate experiments and cohorts (biological replicates). All behavioral statistics were performed in GraphPad Prism (Version 7), unless otherwise indicated, and all statistical tests used are indicated in the figure legends. Experimental designs with one categorical independent variable were assessed by Shapiro-Wilk normality tests. If normality test passed, parametric statistics (t-test, one-way ANOVA) were applied. In case of non-normal distributions, non-parametric statics (Mann-Whitney U-test and Wilcoxon signed rank test) were planned. Experimental designs with two categorical independent variables were assumed to be normal and analyzed by two-way (optional: repeated) ANOVA without formally testing normality, followed by Sidak’s multiple comparison test. All significance levels are given as two-sided and were corrected for multiple comparisons, wherever applicable. The HI/LI split for pharmacological and shRNAi knockdown experiments was done by pooling the non-treatment data from both groups and performing a median split, treating the grouping as factor for subsequent statistical analysis, as described elsewhere . For all behavioral cohorts, a Grubb’s outlier test was performed at alpha = 0.0001 for the main parameters in the GNG task (precue poke rate, FA, and correct Go responses). In case of comparing distributions (i.e., precue response rate, latency to poke), a two-sample Kolmogorov-Smirnov test was used where data from all trials and all animals within a given group were pooled for analysis ( , , and ). The multivariate comparison of behavioral experiments (i.e., precue response rate and % of NG trials with FA) were assessed by multivariate ANOVA (with repeated measures if applicable) followed by ANOVA in case of significance for the given measure. 
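As an illustration of the HI/LI median split and the pooled two-sample Kolmogorov-Smirnov comparison described above, a minimal Python sketch is given below. The data layout (one precue response rate per animal, one array of trial latencies per animal) and the variable names are assumptions for the example and do not correspond to the original analysis scripts.

```python
# Sketch of the median split on precue response rate and a pooled KS comparison.
import numpy as np
from scipy.stats import ks_2samp

def median_split(precue_rate_per_animal):
    """Label each animal HI or LI by a median split on precue response rate."""
    rates = np.asarray(precue_rate_per_animal, dtype=float)
    return np.where(rates > np.median(rates), "HI", "LI")

def compare_distributions(trials_group_a, trials_group_b):
    """Two-sample KS test on trial-level data pooled across animals."""
    stat, p = ks_2samp(np.concatenate(trials_group_a),
                       np.concatenate(trials_group_b))
    return stat, p

# Example: poke latencies (s) pooled over trials for two hypothetical groups.
hi = [np.array([0.4, 0.6, 0.5]), np.array([0.7, 0.5])]
li = [np.array([1.1, 0.9]), np.array([1.3, 1.0, 0.8])]
print(median_split([2.1, 0.4, 1.7, 0.6]))
print(compare_distributions(hi, li))
```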
Data for rs-fMRI were analyzed as reported in the Resting-state functional magnetic resonance imaging section, one-sample t-test, and two-way ANOVA results are reported in and , respectively. Omnibus significance values were rounded up for values p < 0.0001. Post hoc significance values were rounded up and given as * for values p < 0.05, ** for values p < 0.01, *** for values p < 0.001, and **** for values p < 0.0001; where no significance was made explicit, the test did not reach a significance level of p < 0.05. Unless stated otherwise, data are shown as mean ± SEM. Data exclusion For behavioral experiments, animals were excluded based on the following criteria. Note that some animals might fit more than one exclusion criteria. Histology exclusion (all sessions): Animals were removed after histological evaluation due to incorrect targeting or low viral expression. Technical exclusion (per session, i.e., laser in precue and/or cue-sessions, vehicle and/or treatment sessions): software malfunction, excessive coiling of the laser fibers or the loss of fibers during the task for optogenetic cohorts; malaise due to gavaging for the mGlu4 vehicle/PAM cohorts. Non-performance exclusion (per session): all cases of animals not reaching fixed thresholds for correct Go of 80% and FA of 45% for 3 consecutive prior days (baseline) (post-surgery, if applicable). Statistical outlier exclusion (per parameter): data points excluded for statistical power after stratification by Grubb’s outlier removal for individual parameters using conservative settings (alpha = 0.0001), followed by mixed-model ANOVAs where applicable. Applying these criteria stratified the cohorts as follows. Optogenetic cohorts: 2 out of 20 AAV::GFP, 2 out of 15 AAV::ChR2, and 6 out of 33 AAV::Arch did not reach sufficient virus expression and/or missed injection targets and were excluded. In the GNG task during laser manipulation in the precue period, from the remaining animals, 4 AAV::GFP, 2 AAV::ChR2, and 8 AAV::Arch animals were excluded for technical reasons and 1 AAV::GFP and 2 AAV::Arch animals were excluded for non-performance. In the GNG task during laser manipulation in the cue period, from the remaining animals, 3 AAV::GFP, 3 AAV::ChR2, and 7 AAV::Arch animals were excluded for technical reasons and 1 AAV::GFP, 3 AAV::ChR2, and 5 AAV::Arch animals were excluded for non-performance. mirE-cohorts: 2 out of 19 animals for mirE-control and 3 out of 20 animals for mirE-mGlu4 did not reach sufficient virus expression and/or missed injection targets and were excluded. In the GNG task, from the remaining animals, one animal from mirE-Renilla and one animal from mirE-mGlu4 were excluded for non-performance. mGlu4 PAM cohorts: In the GNG task 4 out of 31 animals from mGlu4 PAM and 2 out of 31 animals from vehicle treatment groups were excluded due to malaise. Further, from the remaining animals, one animal from the mGlu4 PAM group was excluded for non-performance in the task. Finally, two animals from the HI/precue and LI/Go parameter were excluded as statistical outlier (Grubb’s). Behavior scripts Code for assessing behavioral parameters. MATLAB and Python scripts to extract all animal behavioral data from sensor data from the cage/behavior operating software, such as precue response rate, % of correct Go responses. Used to generate main figures: ; ; – . Electrophysiology scripts Code for post-processing in vivo electrophysiological data. 
A set of R scripts to process and combine the electrophysiological data exported from Neuroexplorer for further analysis and visualization, with additional options to perform cluster permutation statistical tests. Used to generate main figures: – . fMRI data plus scripts Data and code for fMRI. A set of R scripts to process the time-series BOLD signal from the fMRI datasets exported from DPABI, applied to generate correlation matrices, perform one-sample t-test and ANOVAs. Used to generate main figures: , . Histology scripts Code for cell counting from histological images. A custom-made ImageJ script to evaluate in situ hybridization signals from images obtained using an automated widefield microscope. Used to generate main figures: . C57Bl6/J male mice purchased from Jackson Laboratory were used in all experiments. Animals were housed in groups of max 5 and kept under a 12:12 hr light:dark cycle with food and water ad libitum. Experiments were conducted during the light period. Water deprivation was practiced over the period of GNG behavioral testing with exception of the post-surgery recovery period. All animal experiments were performed in accordance with institutional guidelines and were approved by the respective Austrian (BGBl nr 501/1988, idF BGBl I no 162/2005) and European (Directive 86/609/EEC of November 24, 1986, European Community) authorities and covered by the license GZ2452882016/6. Overall animal history is shown in . Male wild-type mice (2–3 months of age) were deeply anesthetized with isofluorane, decapitated, and their brains quickly chilled in sucrose-based dissection buffer, bubbled with 95% O 2 /5% CO 2 containing the following (in mM): 220 sucrose, 26 NaHCO 3 , 2.4 KCl, 10 MgSO 4 , 0.5 CaCl 2 , 3 sodium pyruvate, 5 sodium ascorbate, and 10 glucose. Transverse coronal brain slices (300 µm) were cut in dissection buffer using a Vibratome (Leica, VT1000S) and immediately incubated for 15 min recovery phase in oxygenated artificial cerebrospinal fluid (aCSF) 126 NaCl, 2.5 KCl, 1.25 NaH 2 PO 4 , 26 NaHCO 3 , 2.5 CaCl 2 , 2.5 MgCl 2 , and 25 glucose in 95% O 2 /5% CO 2 at 32°C. This was followed by a slice resting phase with oxygenated aCSF for at least 45 min at room temperature (RT). Individual brain slices containing STN were placed on the stage of an upright, infrared-differential interference contrast microscope (Olympus BX50WI) mounted on a X-Y table (Olympus) and visualized with a 40× water immersion objective by an infrared sensitive digital camera (Hamamatsu, ORCA-03). Slices were fully submerged and continuously perfused at a rate of 1–2 ml/min with oxygenated aCSF. Neurons, either infected by AAV::Arch or AAV::ChR2, were identified by the presence of YFP fluorescence. Patch pipettes were pulled on a Flaming/Brown micropipette puller (Sutter, P-97) from borosilicate glass (1.5 mm outer and 0.86 mm inner diameter, Sutter) to final resistances ranging from 3 to 5 MΩ. Internal solution for voltage-clamp recordings of responses to optogenetic stimulation contained (in mM): potassium gluconate, 135; KCl, 5; HEPES, 10; MgCl 2 , 2; EGTA, 0.2; MgATP, 4; Na 3 GTP, 0.4; K 3 -phosphocreatine, 10; biocytin, 0.1; pH 7.2 (with KOH). Cells were held at –70 mV. Cells were allowed to reestablish constant activity during 5 min waiting time after breaking the seal. In case of AAV::ChR2: Increasing frequencies (5, 10, 20, 40, 80 Hz) of optogenetic pulses (473 nM) were applied to test for opto fidelity. 
In case of AAV::Arch, cells were subjected to an increasing ramp of 1 s depolarizing current pulses, each one accompanied by 400 ms of 20 Hz optogenetic pulses (563 nM) after 300 ms. Surgeries were performed using a Model 1900 Stereotactic Alignment Instrument (David Kopf Instruments) and a Model 1911 stereotactic drill (David Kopf Instruments). For injections, a Nanoliter 2000 injector, driven by a Micro4 MicroSyringe Pump Controller (World Precision Instruments), was used. Needles for virus injection were pulled from 3.5 nl glass capillaries (World Precision Instruments) on a Micropipette Puller (Model P-97, Sutter Instruments). The surgical protocol was adapted from : Mice were deeply anesthetized in the stereotactic frame with isofluorane (1.7%, IsoFlo, Abbot Laboratories) and anesthesia was verified by testing deep plantar reflexes. Gentamicin ointment (Refobacin 3 mg/g, Merck) was used to protect the animals’ eyes, and their body temperature was kept constant at 36°C using a heating pad. For optogenetic manipulation, one of the following viral constructs (see Key resources table) was bilaterally injected directly into the STN at –1.85 mm AP/ ±1.60 mm ML/–4.70 mm DV from bregma at a speed of 20 nl/min followed by a 5 min waiting period where the needle was kept in place in order to avoid leakage. Final injected volume was 100 nl in case of AAV::GFP and AAV::ChR2 groups, with 200 nl used for the rest of experimental groups. After the injection optic fibers (MFC_200/245-.053_5.0 mm_ZF1.25(G)_FLT from Doric Lenses) were implanted at –2.18 mm AP/±1.60 mm ML/–4.55 mm DV from bregma. For electrophysiological assessment, the animals were trained in GNG task till achieving performance and silicone electrodes (A1 × 16 poly2 50 × 375; Neuronexus) were implanted at STN (–1.85 mm AP/±1.60 mm ML/–4.70 mm DV from bregma) or SNr (–3.14 mm AP/±1.50 mm ML/–4.70 mm DV from bregma). Ground screws were mounted above the contralateral prefrontal cortex and cerebellum. All implants were fixed to the skull with dental cement (SuperBond C&B kit, Prestige Dental Products). Animals were given enrofloxacin (100 mg/ml, Baytril, Bayer Austria) and carprofen (Rimadyl, 50 mg/ml; Pfizer Austria) via drinking water for at least 7 days and were granted a resting period of at least 14 days before behavioral retraining commenced. To suppress mGlu4 expression in the STN, we constructed an AAV-based vector expressing GFP and miRNA-adapted shRNAs in the optimized miR-E backbone (under control of the SFFV promoter; AAV-SFFV-GFP-miR; ASGE), as described elsewhere . Four independent shRNAs targeting mGlu4 (guide sequences: 5’-TAGTA) were designed based on optimized design rules (see Key resources table) and cloned into miR-E and the mix was used to make viral preparation for surgical injection. To test the knockdown efficiency, mouse neuroblastoma cell line (Neuro-2a, ATCC CCL-131) were transfected with the constructs mix or control plasmid using Lipofectamine 3000 according to manufacturer’s instructions. Cells were harvested 72 hr after transfection, cell pellet was resuspended in extraction buffer consisting of 20 mM Tris–HCl (pH 7.5), 100 mM NaCl, 5 mM MgCl, 2 mM NaF, 10% glycerol, 1% NP40, 0.5 mM DTT, supplemented with protease inhibitor cocktail (Complete EDTA‐free, Roche) and lysed on ice for 5 min. Input lysates and immunoprecipitates were resuspended in SDS sample buffer and heated to 95°C for 2 min. 
Total protein amount was assessed using Pierce BCA Protein Assay Kit according to manufacturer’s instructions (Thermo Fisher Scientific). Pre-cast NuPAGE Novex 4–12% Bis-Tris midi gels (Invitrogen, XP04122BOX) were run in NuPAGE MOPS SDS running buffer (Thermo Fisher Scientific, NO0001). The protein samples were transferred onto a cellulose membrane using semi-wet transfer. The membranes were blotted with rabbit polyclonal antibody against mGlu4 (1:500 ab53088, Abcam) and mouse monoclonal anti-β-actin (1:5000 A5441; Sigma). The mGlu4 expression was assessed with ImageJ and normalized to β-actin signal for each sample. The protocol was based on published material . All animals were water deprived over-night to increase their motivation for a milk reward (10% condensed milk in water). The experiments were conducted in, in-house customized, Coulbourn behavioral testing boxes. For the experimental setup, a custom-built port with rectangular entrance in which the animal can insert the entire head was used. This port was equipped with a liquid delivery system and a blue/yellow (balanced) LED behind the port which can illuminate the port. Furthermore, it possessed two IR beams, first one to detect nose pokes (visits to port) and an additional one to detect licks at the liquid delivery system tube. The cage was cleaned with 70% ethanol before each mouse. Prior to GNG training, animals were habituated for one session in which the reward was dispensed in the port at variable intervals. Each reward occurrence was associated with a click sound that was then used throughout the entire behavioral experiment. The habituation session contained 60 reward deliveries without cue presentation, with house light on and separated by light off inter-trial interval (ITI) of 10 s. Each trial of the GNG task consisted of four periods. The precue period of varying duration (9–24 s) was signaled by the house light turning on. Responses during this period were recorded but not reinforced. The variable timing was used to avoid predictive behavior toward the presentation of the cue. Any port visit during the last 3 s of the precue period terminated the trial (jump to ITI) in order to prevent false positive/false negative response counts upon cue presentation. Cue presentation followed the successful passing of the precue period and consisted of an auditory cue. The sounds chosen were a pulsed white noise for the Go trials and a 3 kHz pure tone for the NG trials, both set at 75 db intensity. The first visit to the response port, during the cue presentation, ended the cue and led to: (1) the reward delivery period (3 s), in case of a correct Go response; or (2) a jump to the ITI without reward delivery if a poke was detected during NG cue presentation (named false alarm, FA). Conversely, a successful NG trial was achieved by withholding poke responses in the port during the entire NG cue presentation period and resulted in reward delivery after cue end. In both correct responses, the reward was coupled with a ‘click’ sound and an LED light in the port and consisted of a 20 μl 10% milk solution. Licks were recorded for the duration of the entire trial. Each trial ended with a 10 s ITI during which the house light was off. Responses during this period were recorded but had no consequences. Following habituation, animals started the first training phase of the GNG task. Here, only Go trials were used, where an animal had to deliver a port response during the 30 s Go cue presentation. 
Training occurred daily and consisted of 60 Go trials in each session or 40 min, whichever came first. After the animals reached a correct response rate of at least 60% and their performance was stable over the course of three consecutive sessions, the next training phase started. In the second training phase, animals were introduced to the NG cue and cue presentation time was set to 10 s for both Go and NG trial types. Each session consisted of a total of 60 trials (30× Go and 30× NG, random order) or 40 min, whichever came first. Animals were trained until they reached a correct response rate of at least 80% paired with a maximum FA rate of 45% and their performance was stable over the course of three consecutive sessions. Animals that met these criteria then underwent stereotaxic surgery. After the post-operative rest period, the mice were retrained until they reached the performance criteria for the optogenetic and pharmacological experiments. For the shRNA experiment, animals were tested every second day for a total of seven sessions. Additionally, animals that on a non-treatment day performed less than 80% correct Go responses and/or made more than 60% FA during NG trials were excluded. Animals passing these criteria were assigned to HI/LI groups based on 25th/75th percentile or median splits. This yielded individually consistent categorization of HI/LI animals, which was comparable across experiments .
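To make the trial logic and the main readouts concrete, the simplified Python sketch below classifies single trials and computes the precue poke rate from poke timestamps. The data structure is hypothetical, and details such as the termination of trials by pokes in the last 3 s of the precue period are omitted for brevity.

```python
# Simplified classification of GNG trial outcomes from poke timestamps.
from dataclasses import dataclass

@dataclass
class Trial:
    kind: str            # "Go" or "NG"
    precue_dur: float    # variable precue period, 9-24 s
    cue_dur: float       # 10 s in the final training phase
    pokes: list          # poke times (s) relative to precue onset

def classify(trial: Trial) -> str:
    cue_start = trial.precue_dur
    cue_pokes = [t for t in trial.pokes
                 if cue_start <= t < cue_start + trial.cue_dur]
    if trial.kind == "Go":
        return "correct_go" if cue_pokes else "miss"
    return "false_alarm" if cue_pokes else "correct_ng"

def precue_rate(trial: Trial) -> float:
    """Pokes per second during the precue period (impulsivity readout)."""
    n = sum(1 for t in trial.pokes if t < trial.precue_dur)
    return n / trial.precue_dur

# Example trial: one precue poke, one poke during an NG cue -> false alarm.
t = Trial("NG", precue_dur=12.0, cue_dur=10.0, pokes=[3.2, 15.1])
print(classify(t), round(precue_rate(t), 3))
```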
Predicting postoperative nausea and vomiting after cesarean section: a nomogram model combined with gastric ultrasound
272e9cf6-dd96-4a1a-ad2b-7e9423a78ebe
11806823
Surgical Procedures, Operative[mh]
Postoperative nausea and vomiting (PONV) is a common adverse reaction following a caesarean section. Griffith et al. reported that the incidence of PONV varies from 21 to 79% . Numerous studies have demonstrated that PONV after a caesarean section not only increases the risk of postoperative bleeding and wound rupture but also affects the postpartum recovery of pregnant and breastfeeding women . Additionally, it can elevate the risk of postpartum depression . Therefore, caesarean section guidelines also prioritize the prevention of PONV as a key aspect of ensuring rapid postoperative recovery . Current PONV prevention strategies mainly include the prophylactic use of antiemetic drugs, multimodal analgesia, and the perioperative administration of dexmedetomidine . However, most current interventions carry potential risks for pregnant women and may affect newborns either through placental transfer or during breastfeeding . It is therefore crucial to perform individualized PONV risk assessments in pregnant women before a caesarean section. The Apfel score is presently employed to assess patient susceptibility to PONV based on risk factors such as female sex, non-smoking status, opioid use, and a previous history of PONV or motion sickness . However, as all caesarean section recipients are female and most of them avoid smoking for the health of the fetus, the Apfel score may have limitations in predicting PONV occurrence in this population . Furthermore, in addition to common anesthetic and surgical factors and the physiological changes of pregnancy, such as increased gastric pressure due to upward displacement of the stomach by the gravid uterus , Hong et al. showed that the increase in gastric volume in pregnant women compared with non-pregnant women is a significant factor contributing to the elevated risk of PONV . Gastric ultrasound, a non-invasive method for evaluating gastric contents, has been extensively studied in the field of anesthesiology . Chen et al. found that the gastric content and volume in pregnant women can be calculated from the cross-sectional area (CSA) of the gastric antrum . Therefore, the objective of this study was to combine gastric ultrasound assessment of pregnant women with a comprehensive collection of PONV-related risk factors for caesarean sections, in order to establish a caesarean section-specific PONV prediction model and offer valuable insights for clinical prevention and intervention. Patients This prospective observational cohort study was approved by the Research Ethics Committee of the Second Affiliated Hospital of Fujian Medical University (2022 Ethics Review No.285). The study was conducted from August 2022 to February 2023, and data from 116 adult patients who underwent caesarean sections were collected consecutively. All participants gave written informed consent to take part in the study. The study was performed in accordance with relevant guidelines/regulations and conducted following the Helsinki Declaration and good clinical practice. The inclusion criteria were as follows: (1) adult women who underwent caesarean section delivery under spinal anesthesia; (2) American Society of Anesthesiologists (ASA) classification I–III; and (3) single pregnancy ≥ 36 weeks.
The exclusion criteria were as follows: (1) height < 152 cm or height > 180 cm; (2) patients who received antiemetic medication within 24 h before caesarean section; (3) patients requiring emergency caesarean section; (4) pregnant women with severe hypertension during pregnancy (systolic blood pressure > 160 mmHg, diastolic blood pressure > 110 mmHg), heart disease, or diabetes; (5) patients treated with monoamine oxidase inhibitors or tricyclic antidepressants; and (6) morbid obesity (body mass index > 45 kg/m²). Preoperative ultrasound assessment of gastric volume Prior to surgery, ultrasonography was performed in the preparation room using a Mindray M6 ultrasonic diagnostic instrument equipped with a 3C5S convex array probe set to a frequency of 2–5 MHz. The pregnant woman was asked to assume the right lateral position, and the head of the bed was elevated to 45°. The probe was placed in the subxiphoid region, utilising the abdominal aorta and left lobe of the liver as localisation markers to obtain the sagittal plane of the scan . The gastric antrum was explored through a sagittal section (Fig. ), and the maximum anteroposterior diameter (AP) and craniocaudal diameter (CC) of the antrum were measured three consecutive times . The average of each diameter was used to calculate the cross-sectional area of the gastric antrum (CSA): CSA (cm²) = (AP × CC × π) / 4 . The gastric volume was then estimated from the antral CSA according to the following formula : Gastric volume (mL) = 270.76 + 13.68 × CSA − 1.20 × gestational age. Intraoperative management The patient was placed in the supine position before the operation, and the patient's blood pressure was measured every 5 min, three consecutive times, without interference. The baseline systolic blood pressure was determined by calculating the average of the three systolic blood pressure readings. Spinal-epidural anesthesia involved puncturing the L2/L3 space and injecting 15 mg of 0.75% ropivacaine hydrochloride into the subarachnoid space. The spinal needle was then removed, and a 3–4 cm catheter was inserted cephalad through the epidural puncture needle. Subsequently, the patient was immediately returned to the supine position, with an appropriate left tilt and adjustment of the anesthesia plane to the T6–T8 range. If the patient complained of pain for more than 1 min after successful delivery of the fetus, 2 mg of morphine was injected into the epidural space. An intravenous bolus of 8 µg norepinephrine was given immediately after spinal anesthesia to maintain systolic blood pressure changes within 20% of baseline. A systolic blood pressure fall of greater than 20% of baseline, or a value below 90 mmHg, was judged to be hypotension. In that case, an additional 5 µg of norepinephrine was given, the blood pressure was monitored every 2 min, and the medication was repeated until the systolic blood pressure was maintained in the expected range. If the heart rate was < 50 beats/min, 0.5 mg atropine was administered, and additional doses were given as required. A rapid infusion of 500 ml of colloid solution was also administered immediately after spinal anesthesia and before delivery, after which the infusion rate was maintained. After delivery via caesarean section, 10 U of ordinary oxytocin was injected into the uterus, and 100 µg (1 ml) of carbetocin was slowly injected intravenously within 1 min. Intraoperative maternal nausea or vomiting was treated with 10 mg dexamethasone.
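The two formulas above can be applied directly; the short Python helper below reproduces them as written. Units follow the text (cm for the diameters, cm² for CSA, mL for volume); the unit of gestational age is not stated explicitly and is assumed here to be weeks.

```python
# Helper reproducing the antral CSA and gastric-volume formulas given in the text.
import math

def antral_csa(ap_cm: float, cc_cm: float) -> float:
    """CSA (cm^2) = (AP x CC x pi) / 4."""
    return ap_cm * cc_cm * math.pi / 4.0

def gastric_volume(csa_cm2: float, gestational_age: float) -> float:
    """Gastric volume (mL) = 270.76 + 13.68 x CSA - 1.20 x gestational age."""
    return 270.76 + 13.68 * csa_cm2 - 1.20 * gestational_age

# Example with hypothetical diameters (cm) and a gestational age of 38 weeks:
csa = antral_csa(2.1, 2.8)
print(round(csa, 2), round(gastric_volume(csa, 38), 1))
```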
Postoperative analgesia was performed using a postoperative analgesia pump (20 mg dezocine + 0.9% normal saline to 100 ml) in a patient-controlled intravenous analgesia mode. Data collection The primary outcome measure was the incidence of PONV within 24 h after surgery, as determined during follow-up on the second day after surgery. Postoperative vomiting data were collected through follow-up with patients, patients' families, and nurses. Postoperative nausea data were collected using the visual analogue scale (VAS) . A VAS score of 0–1 was considered as no nausea, and a score above 1 was considered indicative of nausea. Data on the potential risk factors for PONV were obtained from patients before surgery, from the electronic medical record system of our hospital, and from data recorded during surgery. Gestational age, age, hyperemesis gravidarum, PONV history, motion sickness history, smoking history, migraine history, body mass index, gastric volume, operation time, anesthesia time, intraoperative dexamethasone, intraoperative morphine, and a systolic blood pressure fall > 20% were recorded. Statistical analysis SPSS 26.0.0.0 and R 4.2.2 were used for statistical analysis. A total of 116 patients were divided into training and validation groups in a 3:1 ratio using a random number table. Categorical variables are expressed as counts (percentages), and measurement data are expressed as mean ± standard deviation (x̄ ± s). Categorical variables were compared using the χ² test, and measurement data were compared using an independent two-tailed t test. In all analyses, statistical significance was set at P < 0.05. Variables that were significant in the univariate analysis of the training group were entered into the multivariate logistic regression model to identify the independent risk factors associated with the occurrence of PONV. The “rms” package of R, version 4.2.2, was used to construct a nomogram prediction model. Internal verification of the nomogram was carried out via the random split verification method. The ROC curve was analyzed to calculate the optimal cut-off value, which was determined by maximizing the Youden index (sensitivity + specificity − 1). The accuracy of the optimal cut-off value was evaluated in terms of sensitivity, specificity, predictive values, and likelihood ratios, and calibration and decision curves were drawn to further assess the accuracy, predictive efficacy, and clinical practicability of the model. Using the test data set, the nomogram and Apfel scoring models were externally verified, and ROC, clinical decision, and clinical impact curves were used to compare the predictive performances of the two models from multiple perspectives. DeLong’s test was used to compare the models in terms of significant differences in their ROC curves for each data set.
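The cut-off selection can be illustrated with a short Python sketch that picks the probability threshold maximizing the Youden index on the ROC curve. The example uses placeholder predictions rather than output from the fitted nomogram, and the original analysis was performed in SPSS and R.

```python
# Choose the cut-off that maximizes the Youden index (sensitivity + specificity - 1).
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

def youden_cutoff(y_true, y_prob):
    fpr, tpr, thresholds = roc_curve(y_true, y_prob)
    j = tpr - fpr                       # Youden index at each candidate threshold
    best = int(np.argmax(j))
    # Return cut-off, sensitivity, and specificity at the optimum.
    return thresholds[best], tpr[best], 1 - fpr[best]

# Placeholder outcome labels (1 = PONV) and predicted probabilities:
y_true = np.array([0, 0, 1, 0, 1, 1, 0, 1])
y_prob = np.array([0.10, 0.30, 0.55, 0.40, 0.70, 0.80, 0.20, 0.35])
cutoff, sens, spec = youden_cutoff(y_true, y_prob)
print(round(roc_auc_score(y_true, y_prob), 2), cutoff, sens, spec)
```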
This prospective observational cohort study was approved by the Research Ethics Committee of the Second Affiliated Hospital of Fujian Medical University (2022 Ethics Review No. 285). This study was conducted from August 2022 to February 2023, and the data of 116 adult patients who underwent caesarean sections were continuously collected. All participants gave written informed consent to take part in the study. The study was performed in accordance with relevant guidelines/regulations and conducted following the Helsinki Declaration and good clinical practice. The inclusion criteria were as follows: (1) adult women who underwent caesarean section delivery under spinal anesthesia; (2) American Society of Anesthesiologists (ASA) classification I–III; and (3) single pregnancy ≥ 36 weeks.
Baseline characteristics
A total of 116 adult patients who underwent caesarean section procedures met the inclusion criteria. Of these, 87 formed the training group, and 24 (27.59%) of these experienced PONV following their procedures. The validation group comprised 29 patients, with 8 (27.59%) who had developed PONV. The test set included another 29 participants, of whom 12 (41.38%) had experienced PONV (Table ).
The difference between the three groups was not statistically significant ( P > 0.05).
Analysis of PONV risk factors
The results of the univariate analysis showed that a history of motion sickness, gastric volume, and a systolic blood pressure fall > 20% were risk factors for PONV ( P < 0.05, Table ). These three variables were subsequently included in the multivariate logistic regression model, which revealed that a history of motion sickness, gastric volume, and a systolic blood pressure fall > 20% were independent risk factors for PONV ( P < 0.05; Table ).
Establishment and evaluation of the PONV nomogram
Based on the logistic regression results, data from the three aforementioned predictors were input into the R 4.2.2 software to construct a nomogram model for predicting PONV following caesarean section (Fig. ).
Efficiency evaluation of the nomogram model
Predicting performance
The model was divided into training and internal verification sets using a 3:1 random split of the data. The AUCs of the training and validation nomogram models were 0.814 (95% CI = 0.709–0.918) and 0.792 (95% CI = 0.621–0.962), respectively, indicating excellent prediction performance (Fig. ). The Hosmer–Lemeshow goodness-of-fit test coefficient of the nomogram model was 0.076. The best diagnostic probability, sensitivity, and specificity in the training set were 0.333, 0.792, and 0.841, respectively. The optimal diagnostic probability, sensitivity, and specificity in the validation set were 0.099, 0.875, and 0.667, respectively (Table ).
Calibration degree
In this study, calibration curves were used to evaluate the degree of fit between the predicted probability and the actual probability of PONV occurrence in the training and validation sets, using the nomogram (Fig. ). The calibration curves showed that the predicted and actual probabilities were in good agreement in both the training and verification sets.
Clinical decision curve analysis
DCA showed that when the threshold probabilities of the training set and the validation set were 8.9–61.4% and 12.1–67.2%, respectively, predicting the probability of PONV after caesarean section with the nomogram yielded more clinical benefit than strategies in which all patients, or no patients, received targeted interventions, indicating its strong clinical practicability (Fig. ).
Clinical impact curve analysis
The clinical impact curve was used to assess the ability of the nomogram model to identify patients at high risk of developing PONV (Fig. ).
Diagnostic efficacy comparison: nomogram model vs. Apfel model
We used clinical data from 29 adult patients who underwent caesarean section procedures at another hospital as a test set. The test set was used to externally validate both the nomogram and Apfel models and to compare their diagnostic efficacy. In the test set, the AUCs of the nomogram and Apfel models were 0.779 (95% CI = 0.593–0.965) and 0.547 (95% CI = 0.350–0.745), respectively (Fig. ). DeLong's test showed that the AUC of the nomogram model was significantly higher than that of the Apfel model (Z = 2.165, P < 0.05). In the test set, the highest diagnostic probability, sensitivity, and specificity for the nomogram model were 0.449, 0.700, and 0.842, respectively, while the corresponding values for the Apfel model were 0.500, 0.600, and 0.526, respectively (Table ).
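The study compares the two AUCs with DeLong's test. As a lightweight illustration of the same idea, the sketch below contrasts two sets of predicted risks on one test set using a paired bootstrap of the AUC difference; this is an alternative to DeLong's method, not the approach used by the authors, and all values are synthetic.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(42)

# Hypothetical test-set outcomes and predicted risks from two competing models.
y = np.array([1, 0, 0, 1, 1, 0, 0, 1, 0, 1, 0, 0, 1, 0, 1, 0, 0, 1, 0, 0])
p_nomogram = rng.beta(2, 3, len(y)) * 0.5 + y * 0.3   # stronger separation
p_apfel = rng.beta(2, 3, len(y)) * 0.8 + y * 0.05      # weaker separation

def bootstrap_auc_diff(y, p1, p2, n_boot=2000):
    """Paired bootstrap 95% CI for AUC(p1) - AUC(p2) on the same test set."""
    diffs = []
    idx = np.arange(len(y))
    for _ in range(n_boot):
        s = rng.choice(idx, size=len(y), replace=True)
        if len(np.unique(y[s])) < 2:   # both classes are needed to compute an AUC
            continue
        diffs.append(roc_auc_score(y[s], p1[s]) - roc_auc_score(y[s], p2[s]))
    return np.percentile(diffs, [2.5, 97.5])

lo, hi = bootstrap_auc_diff(y, p_nomogram, p_apfel)
print(f"AUC model 1 = {roc_auc_score(y, p_nomogram):.3f}, AUC model 2 = {roc_auc_score(y, p_apfel):.3f}")
print(f"95% bootstrap CI for the AUC difference: [{lo:.3f}, {hi:.3f}]")
```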
In the test set, DCA results showed that the nomogram model had a higher benefit when the threshold probability was 14.1–62.1%, and the Apfel model had a higher benefit when the threshold probability was 31.5–37% (Fig. ).
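Decision-curve comparisons like the one above rest on the net benefit computed across a range of threshold probabilities. The following sketch shows the standard net-benefit calculation on invented data; it is illustrative only and uses none of the study's numbers.

```python
import numpy as np

def net_benefit(y_true, y_prob, threshold):
    """Net benefit of treating patients whose predicted risk exceeds `threshold`
    (standard decision-curve formula: TP/N - FP/N * pt/(1 - pt))."""
    y_true = np.asarray(y_true)
    treat = np.asarray(y_prob) >= threshold
    n = len(y_true)
    tp = np.sum(treat & (y_true == 1))
    fp = np.sum(treat & (y_true == 0))
    return tp / n - fp / n * threshold / (1 - threshold)

# Hypothetical predictions; compare the model against a "treat all" strategy.
y_true = np.array([0, 1, 0, 1, 1, 0, 0, 1, 0, 0])
y_prob = np.array([0.2, 0.7, 0.1, 0.6, 0.8, 0.3, 0.2, 0.5, 0.4, 0.1])
for pt in (0.1, 0.2, 0.3, 0.4):
    nb_model = net_benefit(y_true, y_prob, pt)
    nb_all = net_benefit(y_true, np.ones_like(y_prob), pt)   # treat everyone
    print(f"pt={pt:.1f}  model NB={nb_model:+.3f}  treat-all NB={nb_all:+.3f}")
```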
Discussion
PONV is a common adverse reaction following surgery, including caesarean section . PONV not only increases the risk of other postoperative complications but also causes discomfort from nausea and vomiting, leading to reduced patient satisfaction and delayed discharge, imposing a substantial burden on patients . In non-obstetric surgery, patient susceptibility to PONV is evaluated using the Apfel simplified risk score . In the Apfel score, patient and anesthesia factors contribute most to the risk of vomiting; for instance, the ORs for female sex and postoperative opioid use were reported as 4.78 and 2.44, respectively . However, in patients undergoing spinal anesthesia for caesarean section, these two factors are generally present by default, potentially limiting their predictive value. In contrast, some maternal physiological and caesarean section-related factors may be associated with PONV risk; however, their predictive performance has not been integrated into the risk score prediction model. In addition, when the efficacy of the Apfel score was evaluated, the AUC of its ROC curve was 0.59 in obstetric patients compared with 0.753 in non-obstetric patients, further indicating the limited predictive performance of the Apfel score in the obstetric population .
To establish a risk prediction model specific to caesarean section, this study collected information on potential perioperative risk factors, including patients' baseline characteristics, surgical factors, anesthesia factors, and gastric ultrasound findings. In this study, a total of 32 patients developed PONV, accounting for 27.59% of the total sample. Analysis of the collected data revealed that a history of motion sickness, gastric volume, and a systolic blood pressure fall > 20% were independent risk factors for nausea and vomiting after caesarean section. Based on these three factors, a predictive model for nausea and vomiting after caesarean section was established using R software and visualized as a nomogram. The model underwent verification and evaluation using the area under the ROC curve, calibration curves, DCA, and CIC analysis, and demonstrated good predictive performance and clinical application value. Among the included indicators, a history of motion sickness has been widely confirmed to have high predictive value for PONV. Apfel and Koivuranta included it in their studies, constructing PONV prediction models that have found widespread use in clinical practice . The results highlighted that a history of motion sickness is an independent risk factor for PONV during caesarean section, with the incidence of PONV being 5.08 times higher in patients with this history than in those without . Lee et al. further demonstrated the efficacy of prophylactic dexamethasone administration in reducing the incidence of PONV in patients with a history of motion sickness . Horn et al. found that after spinal anesthesia, hypotension caused by vasodilation, whether postural hypotension or hypotension caused by other factors, can stimulate receptors of the central nervous system to release emetic neurochemical transmitters, resulting in nausea and vomiting . This is consistent with the results of the present study, illustrating that intraoperative hypotension is an independent risk factor for PONV during caesarean section . The use of gastric ultrasound has become increasingly prevalent in obstetric anesthesia in recent years . Hong et al. observed that pregnant women had larger gastric volumes than non-pregnant women, while Cozza et al. found that increased gastric volume corresponds to a higher incidence of PONV . Hamed et al. showed that preoperative metoclopramide administration could effectively reduce the incidence of PONV by decreasing gastric volume, further substantiating these findings . The results of this study likewise suggest that increased gastric volume is an independent risk factor for PONV in caesarean sections. The nomogram and Apfel scoring models were externally verified using the test dataset, and an ROC curve was drawn to compare their diagnostic efficacy for predicting PONV following caesarean section procedures. The AUCs of the two models were 0.779 and 0.547, respectively. The DCA curves for the two models show that the nomogram had a wider threshold probability range and a higher net clinical benefit, further confirming its superiority for predicting PONV following caesarean section procedures. The nomogram prediction model constructed in this study effectively predicts the risk of PONV in caesarean sections. Internal verification shows that the model exhibits good discrimination, consistency, and clinical utility.
In the era of individualized precision medicine, which is attracting increasing attention, the ability to promptly identify caesarean section patients at risk of PONV and implement targeted preventive measures, such as preoperative antiemetic drug administration, intraoperative blood pressure stabilization, adjustment of opioid dosage, and other supportive treatments, is invaluable. These measures can significantly enhance postoperative recovery speed and patient satisfaction. However, this study has some limitations. First, it was conducted at a single center with a relatively small sample size, potentially introducing selection bias; further verification using larger, multicenter datasets is therefore warranted to validate the results. Second, the scope of factors considered in this study was limited, potentially overlooking certain risk factors associated with PONV. Follow-up studies can expand on the patient data collected here and screen for indicators with stronger correlations with PONV to establish a more precise prediction model. In this study, a nomogram model for predicting PONV was constructed by combining three independent risk factors identified through multivariate regression analysis. This model serves to optimize the preoperative evaluation system for caesarean sections, formulate individualized perioperative management strategies, accelerate recovery, and improve prognosis.
Conclusion
Using multivariate logistic regression analysis, we found that a previous history of motion sickness, intraoperative hypotension, and gastric volume were independent risk factors for the development of PONV following caesarean section procedures. We also confirmed that our nomogram model constructed from these three independent risk factors had higher predictive performance and clinical practicability, and was significantly more accurate than the Apfel scoring model for this application. This model may help to further optimize preoperative evaluation systems for caesarean section procedures, develop individualized perioperative management strategies, achieve more rapid recoveries for patients undergoing caesarean sections, and improve their prognoses.
Validation of the Arabic eHealth literacy questionnaire: a factor and Rasch analysis study
b1eba2f3-ff1e-457d-8bea-13a72a481dec
11842305
Health Literacy[mh]
Introduction Health literacy involves understanding and interpreting health information to make health-related decisions . Education, health awareness, and the skills needed to read and interpret medicine bottles, appointment slips, transit cards, and doctor’s orders are all part of health literacy. It equips individuals with the ability to navigate the complex world of health care and disease management . Health literacy is essential for accessing and using healthcare. It enables individuals to make informed choices by understanding health issues, diseases, and treatment options . Healthcare systems face the challenge of managing an increasing volume of healthcare-related information and clinical records . Simultaneously, evolving information technology offers solutions by enabling the management of vast amounts of information through computerized storage of health records . The advent of computer technologies has prompted healthcare officials to prioritize their integration within the healthcare system. This initiative has proven effective in sectors such as laboratories and pharmacies . Norman and Skinner proposed the concept of e-health literacy, which refers to the capacity to effectively access, locate, and evaluate health-related information from electronic sources in order to address health-related issues. E-health literacy encompasses six fundamental competencies: traditional literacy, which includes reading, understanding, communicating, and writing; health literacy, focused on accessing, comprehending, evaluating, and applying health-related information; information literacy, which entails the effective access and use of information; media literacy, involving the ability to select, understand, evaluate, and create media-based messages; scientific literacy, which uses scientific methods to understand, evaluate, and explain health situations; and computer literacy, particularly in troubleshooting computer issues. Successful use of eHealth resources requires individuals to possess digital health literacy skills. Low health literacy has been linked to poorer health outcomes and increased healthcare disparities, which makes it an essential area of focus in public health interventions . With the increasing reliance on digital health platforms, individuals without adequate eHealth literacy may face additional barriers to accessing healthcare and using digital resources effectively . Assessing eHealth literacy is essential for understanding the use of eHealth platforms and resources. One of the first tools that was developed to assess health literacy was the eHealth Literacy Scale (eHEALS) , which remains a widely used tool that was applied to evaluate eHealth literacy and its impact on different health outcomes in different settings and populations . However, the connections between these studies’ findings and specific eHealth recommendations were generally unclear . Furthermore, as noted by eHEALS author , the digital environment had substantially evolved since 2006, especially in terms of interactivity and information and communication technologies (ICT) capabilities and suggested revising the eHealth Literacy concept and eHEALS. Moreover, the studies that evaluated the construct of the eHEALS produced inconsistent results related to the number of factors and the distribution of the items between the different factors . To overcome these drawbacks the eHealth Literacy Questionnaire (eHLQ) was developed. The tool is a 35-item measure comprising seven domains of eHealth literacy. 
It measures various domains of eHealth literacy such as using technology to process health information, understanding electronic health information, finding reliable electronic health information, engaging with digital health services, feeling secure when using eHealth resources, and being motivated to engage with digital health. While this questionnaire has been validated in English, Danish, and Norwegian populations, no equivalent tool currently exists for Arabic-speaking populations, despite the significant number of Arabic speakers globally and their growing engagement with digital health platforms . The present study aimed to validate, assess for trustworthiness, and test the stability of the Arabic eHLQ among Jordanian adults. Jordan, with its high internet penetration rate and increasing reliance on digital health platforms , provided an ideal setting for this study. By evaluating Arabic speakers’ electronic health literacy with a validated and culturally appropriate questionnaire, the study aimed to aid healthcare providers and researchers in designing targeted interventions to improve health literacy and health outcomes. Materials and methods This cross-sectional study collected data using both paper-based and electronic questionnaires. Data was collected at a single time point from participants. Ethical approval was obtained from Al-Zaytoonah University of Jordan (Ref no:03/2023-2024), and the study adhered to the Declaration of Helsinki’s ethical principles for medical research involving human subjects. 2.1 Sample and sampling method This study employed a convenience sampling method. A participant-to-item ratio of 10:1 was used to obtain the required sample size for factor analysis . The study targeted all citizens residing in Jordan as its population. To ensure geographical representation, participants were recruited from various central regions across the country. Inclusion criteria required participants to be Jordanian citizens aged 18 years or older, and literate in reading and writing. Exclusion criteria stipulated that individuals under the age of 18 years and those residing outside Jordan were ineligible to participate in the study. The recruiters were instructed to approach individuals from different age groups and from different sociodemographic statuses. A total of 657 participants completed the questionnaire. Paper and electronic data collection method was adopted, with approximately one-third of surveys distributed in paper format and the remainder electronically. Distribution channels included community centers, healthcare facilities, and online platforms. Both the online and paper questionnaires included an introductory paragraph that clarified the study’s aim, participants’ rights and roles, inclusion and exclusion criteria, and the consent form. 2.2 Instrument The survey instrument, consisting of 45 questions, gathered both demographic details and health literacy insights. The demographic data contained ten items to collect data about key sociodemographic aspects, including age, gender, monthly income, presence of chronic illnesses, self-assessed health status, education level, involvement in the medical field (study or work), preferred methods for accessing medical information online, and duration of daily online activity. The second part of the survey consisted of the Arabic version of eHLQ , developed in accordance with the eHealth Literacy Framework (eHLF) proposed by Norgaard et al. . 
This self-report measure is composed of seven dimensions with a total of 35 items: five items for each of the first five dimensions, six items for the sixth dimension, and four items for the seventh dimension. The scale uses an ordinal response format, with responses graded on a Likert scale ranging from 0 (strongly disagree) to 3 (strongly agree). It comprises seven dimensions: 1- using technology to process health information, 2- understanding health concepts and language, 3- ability to actively engage with digital services, 4- feeling safe and in control, 5- motivated to engage, 6- access to working digital services, and 7- digital services that meet individual needs .
2.3 Tool validation
The eHLQ was chosen by an expert panel composed of two clinical pharmacists and one public health specialist. The selection was based on the questionnaire's comprehensive coverage of various e-health literacy domains and its use of simplified language. The content validity of the questionnaire was confirmed by the expert panel and by members of the general population. The survey was translated into Modern Standard Arabic following the Brislin principle to ensure that the Arabic version retained the original meaning of the questionnaire with cultural relevance. The forward-backward translation process was conducted by separate independent translators. The translated versions were compared, and a final Arabic version was produced. Thirty participants were recruited for a pilot study to assess the questionnaire's face validity. Participants were randomly selected and briefed on the study's purpose. They were asked to complete the questionnaire and participate in an open discussion to provide feedback. Specifically, they were asked to evaluate the relevance, clarity, content and simplicity of the items. Ultimately, the participants confirmed the adequacy of the eHLQ, as all the participants found it easy to comprehend and complete, with no further modifications being necessary. With the advancement of psychometric methods adopted to validate different health tools, the selection of the most adequate method may be demanding . The present paper applied Rasch model theory and classical test theory (CTT) to increase the validity and reliability of the study results.
2.4 Data analysis
The Statistical Package for the Social Sciences software (SPSS) version 23 and RStudio Software with packages TAM version 4.2-21 and lavaan version 0.6-17 were used for data analysis. All continuous variables were expressed as medians and interquartile ranges. For categorical variables, frequencies and percentages were reported. The internal consistency of each scale was evaluated by computing Cronbach's alpha and McDonald's ωt, with values >0.7 considered acceptable . Confirmatory factor analysis (CFA) for ordinal data using the Diagonally Weighted Least Squares (DWLS) estimator was conducted to verify the fit of the original 7-factor model, and scaled model fit indices, including the Comparative Fit Index (CFI), Tucker-Lewis Index (TLI), Root Mean Square Error of Approximation (RMSEA), Standardized Root Mean Square Residual (SRMR), chi-square with degrees of freedom and p-value, and Minimum Discrepancy of Confirmatory Factor Analysis/Degrees of Freedom (CMIN/DF), were computed. The acceptable values were as follows: CMIN/DF < 5 , RMSEA < 0.08 , SRMR ≤ 0.08 , and TLI and CFI values closer to 1 . Multidimensional Rasch analysis was performed, and item thresholds were examined.
Disordered thresholds may indicate irregularity and may arise when respondents fail to select the appropriate response options, which can result from unclear labeling or an excessive number of answer options. Model fit was assessed by computing item/person separation reliability. Item infit and outfit mean square values (MNSQs) were assessed, with the acceptable range set between 0.5 and 1.5 . Additionally, differential item functioning (DIF) was evaluated to examine potential biases resulting from gender differences, with differences of ≥ 0.43 logits considered indicative of meaningful DIF . Moreover, ceiling or floor effects were evaluated by computing the frequency of participants who scored the maximum and minimum possible scores.
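As a small illustration of the internal-consistency step described above, the sketch below computes Cronbach's alpha from scratch for one hypothetical five-item scale. The response matrix is invented, and McDonald's ωt, which requires a fitted factor model, is not shown.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for a respondents x items score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

# Hypothetical responses for one 5-item eHLQ scale (rows = respondents).
scale = np.array([
    [3, 3, 2, 3, 3],
    [2, 2, 2, 1, 2],
    [3, 2, 3, 3, 2],
    [1, 1, 2, 1, 1],
    [2, 3, 2, 2, 3],
    [3, 3, 3, 3, 3],
])
print(f"Cronbach's alpha = {cronbach_alpha(scale):.2f}")
```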
Results
The sociodemographic profile of the study participants is presented in Table . A total of 657 individuals took part.
The median age was 26 years, with ages ranging from 22 to 31 years. Regarding gender distribution, 58.9% of participants identified as female and 41.1% as male. In terms of marital status, the majority (62.3%) reported being single, with the remaining 37.7% being married. Regarding income status, 51% reported earning less than 500 Jordanian Dinars (JOD) per month, 35% reported a monthly income between 500 and 1,000 JOD, and 14% earned more than 1,000 JOD per month. Concerning health, 11.4% of participants reported having a chronic disease, while 88.6% indicated no chronic conditions. Additionally, 42.8% of participants reported that at least one family member had a chronic disease, compared to 57.2% who reported no family history of chronic illness. The participants’ perceptions of their current health status are depicted in . The majority of participants rated their health status as 4 out of 5 (46.9%), followed by a rating of 5 out of 5 (27.1%). Devices that participants used to access medical information online are presented in . Most participants used smartphones (85.23%), followed by laptops (26.63%), with tablets being the least used (8.37%). Additionally, 7.30% of participants reported that they did not use the Internet to access medical information. The evaluation of participants’ eHealth Literacy Questionnaire responses is summarized in . Most participants agreed or strongly agreed with Q6 (94%), followed by Q4 (92.5%). Conversely, most disagreed or strongly disagreed with Q3 (49%), followed by Q20 (46.6%). All items had a median score of 2. 3.1 Scale 1: using technology to process health information Most participants indicated agreement or strong agreement across items, with Q7, Q11, and Q13 showing high levels of agreement (median = 2). However, items Q20 and Q25 had notable proportions of disagreement, suggesting variability in ease of use. 3.2 Scale 2: understanding of health concepts and language Participants generally agreed with items assessing health concept comprehension, as seen in Q5 and Q12 (median = 2), although some disagreement was observed for Q15 (24.5%). 3.3 Scale 3: ability to actively engage with digital services High levels of agreement were reported, particularly for items like Q4 and Q6 (median = 2). Q8 showed a slightly higher disagreement rate (30.6%), which indicated some variability in engagement levels. 3.4 Scale 4: feeling safe and in control Most participants felt safe and in control when using digital health services, with items Q1 and Q10 reflecting strong agreement (median = 2). However, Q14 and Q22 had higher disagreement levels, which suggested some concerns around safety. 3.5 Scale 5: motivated to engage with digital services The majority agreed with items assessing motivation, such as Q2 and Q19 (median = 2), although Q24 showed more disagreement (34.2%), which indicated mixed motivation levels. 3.6 Scale 6: access to digital services that work Responses varied, with some participants finding digital services functional and accessible, while others faced usability issues, as reflected in Q3 and Q16 (median = 2). 3.7 Scale 7: digital services that suit individual needs There was variability in whether services met individual needs. For example, Q18 and Q28 showed high agreement, though Q28 and Q31 had disagreement proportions around 27–30%. Overall, none of the participants recorded the minimum score of 35, and only 1% achieved the maximum score of 140, indicating the absence of ceiling and floor effects. 
All items had a median score of 2, showing a general tendency toward agreement.
3.8 Tool validation
CFA was conducted to evaluate the 7-factor model suggested in the original questionnaire. However, the covariance matrix of latent variables was not positive definite due to excessive correlation between factors 6 and 7, which prevented model convergence. Consequently, these two factors were combined into a single factor, and a new analysis was conducted using a 6-factor model. Cronbach's alphas and McDonald's ωt of the 6 factors are presented in . Across all scales, the reliability coefficients were notably high, affirming the internal consistency of the eHLQ and supporting its validity for assessing various aspects of engagement with digital health services. Specifically, all scales exhibited reliability coefficients above 0.7, ranging from 0.71 for Factor 3 to 0.84 for Factor 6, confirming their adequacy for research use. The chi-square test for the 6-factor model was significant (chi-square = 2,032, df = 545, p < 0.001), indicating inadequacy of the model. However, this was expected due to the large sample size; therefore, the method of dividing chi-square by degrees of freedom was applied and yielded an acceptable CMIN/DF = 3.72. Other scaled model fit indices were also acceptable, including RMSEA = 0.064 (90% CI: 0.062–0.068), SRMR = 0.063, CFI = 0.93, and TLI = 0.92. Standardized factor loadings ranged from 0.42 to 0.81. The highest factor loadings were observed for Q14 and Q22 in Factor 4, while the lowest were recorded for Q3 in Factor 6, and only two items were < 0.5. Detailed standardized factor loadings for each item are provided in .
3.9 Rasch model
A six-dimensional model was analyzed. The person reliability index for the 6 dimensions ranged from 0.68 for dimension 3 to 0.86 for dimension 6, and the item separation reliability ranged from 0.85 for dimension 3 to 0.91 for dimension 6. displays the infit and outfit mean square values, affirming the eHLQ's ability to differentiate between various participant levels and confirming the model's item hierarchy. Notably, the only item slightly exceeding the acceptable range was Q3, with an outfit MNSQ of 1.58 and an infit MNSQ of 1.52. All questions were presented with ordered response categories. Analysis revealed that Q3 was the most challenging item for participants to answer, followed by Q20, whereas Q6 was the most straightforward. Furthermore, the easiest threshold to respond to was the first threshold of Q7, followed by Q6. Conversely, the third threshold of Q23 was identified as the most challenging. Scale 6 recorded the highest outfit and infit values for Q3, at 1.58 and 1.52, respectively. Following this, Scale 4 for Q1 showed values of 1.36 for outfit and 1.26 for infit. The lowest values were observed on Scale 6 for Q28, with outfit and infit values of 0.83 and 0.84, respectively. DIF between genders was evaluated, and the analysis revealed that the difference between the two genders on the logit scale was 0.16 logits, indicating no significant DIF between the two sexes, as the recommended cutoff point is ≥ 0.43. displays the Wright map, showing that participants were distributed across all difficulty levels in all 6 domains, with the majority concentrated in the middle range. The item thresholds revealed a range of item difficulties, all with well-ordered thresholds.
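The item-level fit screening described above can be expressed as a simple check against the 0.5–1.5 MNSQ range. The sketch below uses the three items whose infit/outfit values are quoted in the Results; treat it as an illustrative restatement of that check rather than a re-analysis.

```python
# Screen item fit statistics against the 0.5-1.5 MNSQ range stated in the Methods.
# The values below are the ones quoted in the Results narrative.
item_fit = {
    "Q3":  {"infit": 1.52, "outfit": 1.58},
    "Q1":  {"infit": 1.26, "outfit": 1.36},
    "Q28": {"infit": 0.84, "outfit": 0.83},
}
LOW, HIGH = 0.5, 1.5
for item, stats in item_fit.items():
    flags = [name for name, v in stats.items() if not (LOW <= v <= HIGH)]
    status = "misfit: " + ", ".join(flags) if flags else "acceptable fit"
    print(f"{item}: infit {stats['infit']:.2f}, outfit {stats['outfit']:.2f} -> {status}")
```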
Discussion
The current study focused on translating and validating the 35-item eHLQ into Arabic, aiming to facilitate its cross-cultural adaptation for researchers in Arabic-speaking healthcare contexts. The Arabic version of the eHLQ showed strong internal consistency, with acceptable alpha coefficients in all six domains. The results of this study are consistent with those from eHLQ validation studies conducted in other languages, confirming the robustness of the questionnaire across different cultural and linguistic settings. In comparison, the reliability of the Arabic version of the eHLQ is on par with, or slightly exceeds, that of its counterparts in other languages. This finding reinforces its utility and robustness in Arabic-speaking healthcare contexts. Our CFA supported a 6-factor model, which has a different number of factors to the original 7-factor model . This adjustment was due to a high correlation between the factors "Access to digital services" and "Digital services that suit individual needs," which resulted in their combination into a single factor. The merging of these two factors can be theoretically justified, as both assess access to digital services with similar items, such as "eHealth systems provide me with easy ways to get what I need" from the original factor "Digital services that suit individual needs" and "I have access to health technology that works" from the original factor "Access to digital services." Nevertheless, certain items, such as items 1 and 3, exhibited relatively lower factor loadings. These discrepancies suggest potential challenges in translation or variations in respondents' perceptions, possibly influenced by demographic characteristics.
These findings highlight the importance of further examining these discrepancies to improve the questionnaire’s validity and applicability in diverse cultural settings. Such an investigation could help identify specific cultural or linguistic factors that influence response patterns, thus enhancing the tool’s effectiveness for global health assessments. Addressing these challenges in future iterations may help to refine the tool further and enhance its cross-cultural applicability. Utilizing Rasch analysis to assess the proposed model fit, most items demonstrated satisfactory fit within acceptable thresholds, except for Item 3 which exhibited a slightly high infit/outfit value, indicating potential under-discrimination. It is important to note that outfit statistics are particularly sensitive to responses to items whose difficulty differs significantly from an individual’s ability level while infit statistics is more sensitive to the pattern of responses to items targeted on the person . This study revealed a pronounced ease of interaction with digital tasks. The thresholds were consistently reasonably low, reflecting ease of engagement with digital health information and motivation to use digital services. Overall, while previous studies identified basic digital tasks as relatively easy, our study highlights a greater comfort and proficiency in using digital health platforms among respondents. Conversely, items Q20, Q5, Q4, Q1, Q35, Q23, and Q31, which were distributed across all the questionnaire domains, were identified as the most challenging, indicating that individuals with physical ailments may struggle to access information on managing mental health issues. These observed variations in participants’ health literacy levels are consistent with the understanding that health literacy encompasses not just knowledge, but also the ability to perform responsibilities and tasks related to health and healthcare effectively. This suggests that health literacy involves a dynamic set of skills that vary widely among individuals, impacting their ability to manage their health and interact with healthcare systems. Consistent with previous validations of the eHLQ in other languages, the Arabic version exhibited strong reliability and validity. The overall internal consistency (Cronbach’s α = 0.71–0.86) is comparable to or slightly higher than the reliability coefficients reported in Danish ( α = 0.75–0.87) and Norwegian populations ( α = 0.73–0.90) . Similar to these studies, the Arabic eHLQ demonstrated strong performance across multiple domains, particularly in assessing access to digital services and users’ motivation to engage with digital platforms. The merging of Factors 6 and 7 in this study aligns with findings from a prior validation, which reported high correlations between these domains . Additionally, certain items (e.g., Q3 and Q20) exhibited higher variability in response patterns, consistent with earlier research highlighting these items as universally challenging across diverse populations. These results reinforce the robustness of the eHLQ framework while emphasizing the importance of contextual adaptations to address subtle cultural differences in item interpretation. The findings of the present study underscore the robust validity and reliability of the Arabic version of the eHLQ developed in this research, and evidence the proficiency of Jordanian participants in utilizing digital health services, consistent with trends in other cultural contexts. 
These findings also have practical implications for healthcare providers, policymakers, and researchers. The validated Arabic eHLQ offers a reliable tool to assess eHealth literacy, enabling the design of targeted interventions to improve digital health engagement in Arabic-speaking populations. For instance, the tool can be used to identify individuals or groups who may benefit from tailored educational programs or digital skill training, thereby promoting equitable access to healthcare resources. 4.1 Strengths, limitations, and future directions One of the primary strengths of this study is the large sample size, which contributes significantly to the reliability and validity of our findings. A larger sample size helps mitigate random fluctuations and reduce sampling errors, thereby increasing confidence in the study’s results. Such reliability is essential for producing dependable research outcomes that can inform further studies and practical applications. Furthermore, the substantial sample size facilitated detailed subgroup analyses, which enabled the exploration of nuances and variations among demographic and other relevant factors. This deeper exploration allowed for more insightful conclusions and the identification of important insights that might otherwise be missed. However, despite these strengths, the study is not without its limitations. Convenience sampling was applied, which may be susceptible to selection bias, as certain individuals might be more inclined to participate than others, potentially skewing the results. Nevertheless, the young sample mimics the young Jordanian population, which has a median age of 22.4 years according to the Jordanian High Population Council . Moreover, like the present study sample, more than 41% of the Jordanian population has a monthly household income of less than 500 JDs . Furthermore, the study’s aim was to validate the tool within the general population. Additionally, with this being a study based on self-report, it may have been susceptible to recall and social desirability biases, where participants may not have accurately remembered past events or may have responded in a manner they perceived as favorable rather than truthful. Although the present study went through a systematic translation process, semantic bias cannot be completely excluded. Future research could benefit from employing a more stratified sampling technique to minimize selection bias and ensure a more representative cross-section of the population, including targeting special populations such as older adults, patients with different chronic diseases, people with low education levels, and rural residents. Additionally, to mitigate the effects of recall and social desirability biases inherent in self-report studies, future investigations could implement mixed methods approaches; qualitative interviews could also provide deeper insights into the motivations behind participants’ responses, offering a more nuanced understanding of their health literacy and digital engagement. Finally, although this study applied a rigorous validation methodology, it did not evaluate test-retest reliability, which could be conducted in future research. 
Conclusion This study provides a comprehensive development and evaluation of the Arabic version of the eHLQ, focusing on its accuracy, reliability, and applicability within Arabic-speaking populations. The findings demonstrate that the Arabic translation of the eHLQ maintains a high level of internal consistency, comparable to or surpassing the reliability of versions in other languages. CFA and Rasch analysis both supported the tool’s effectiveness, with the six-factor model demonstrating adequate fit and most items performing well within acceptable thresholds. With this validation, the questionnaire can now be used to assess the e-health literacy of Arabic-speaking populations. This is crucial for enhancing health outcomes and facilitating greater patient involvement in digital healthcare settings. The study’s findings provide useful insights for healthcare policymakers and practitioners aiming to improve digital health practices in Jordan and other Arabic-speaking countries. 
By utilizing a culturally and linguistically customized eHLQ, health workers can gain a deeper understanding of patients’ e-health literacy. This understanding, in turn, facilitates the enhancement of digital health resource utilization among patients in these countries. Such targeted improvements in e-health literacy can promote more effective and inclusive digital healthcare services, ultimately fostering better health outcomes.
Molecular characterization of feline immune checkpoint molecules and establishment of PD-L1 immunohistochemistry for feline tumors
36d6d637-0be3-4a14-b756-3f63d3241404
9879432
Anatomy[mh]
Tumor treatment in humans has progressed dramatically in the past decade, partly due to the success of molecular-targeted therapies and immunotherapies. Among the emerging therapies, immune checkpoint inhibitors (ICIs), such as anti-programmed death 1 (PD-1) and anti-PD-ligand 1 (PD-L1) antibodies, have shown great promise in the treatment of various malignancies including melanoma, non-small-cell lung cancer (NSCLC), and renal-cell cancer . Cats develop spontaneous tumors typically at >8 years of age, and malignant tumors are diagnosed 4.6-fold more often than benign tumors . Feline tumors are treated with surgery, radiation, and chemotherapy; however, despite advances in veterinary care, neoplasia-related deaths account for 10.8% of the mortality in cats, ranking fourth following trauma (12.2%), renal disorder (12.1%), and non-specific illness (11.2%) . Therefore, the establishment of immunotherapies for feline tumors is urgently needed as an additional/alternative option to conventional therapies. The PD-1/PD-L1 pathway is an important suppressive mechanism for T cell-mediated immunity. PD-1 is an immunoinhibitory receptor expressed on T cells that attenuates antigen-specific responses, such as cell proliferation, cytokine secretion, and cytotoxicity . The expression of PD-L1, a PD-1 ligand, can be found in both hematopoietic and non-hematopoietic cell types, whereas the expression of PD-L2, the other ligand, is restricted to immune cells, such as dendritic cells and macrophages . PD-L1 overexpression is often reported in various human tumors and its expression is correlated with poor prognosis in cancer patients . The blockade of PD-1/PD-L1 using antibodies induces robust antitumor immune responses by reinvigorating T cell effector functions , which has promoted the standard of care for various malignancies in humans. In cats, limited information is available on the PD-1/PD-L1 pathway and its association with tumor disease is unclear. Folkl et al . reported the complete feline PD-1 and partial PD-L1 mRNA coding sequences (CDS) and found upregulation of PD-1/PD-L1 proteins in the lymphocytes of cats chronically infected with feline immunodeficiency virus. The complete CDS for feline PD-L2 mRNA has been registered in the NCBI GenBank database by the same authors (accession number: NM_001290244.1); however, the experimental procedures have not yet been reported, and characterization of its protein is yet to be performed. Recently, an immunohistochemical study using anti-human PD-L1 monoclonal antibody revealed that HER2-positive and triple-negative normal-like feline mammary carcinomas express PD-L1 , while PD-L1 expression in other feline tumors remains unknown. Thus, further genetic and functional characterization of feline PD-L1/PD-L2, as well as the analysis of PD-L1 expression in various feline malignant tumors, is required to develop effective feline ICI therapies. PD-L1 is a useful biomarker of the clinical response of tumors to anti-PD-1/PD-L1 antibody therapies in humans. In a clinical study of melanoma, NSCLC, colorectal cancer, renal-cell cancer, or castration-resistant prostate cancer, pretreatment tumor cell-surface expression of PD-L1 (as determined with immunohistochemistry [IHC]) was correlated with the response to PD-1 blockade, wherein 36% of patients with PD-L1-positive tumors experienced an objective response, while no (0%) patients with PD-L1-negative tumors responded to the treatment . 
Another study in NSCLC patients treated with anti-PD-1 antibody revealed that patients with a proportion score (percentages of neoplastic cells with membranous PD-L1 staining) of at least 50% had a higher response rate and longer progression-free/overall survival than those with a proportion score of less than 50% . Accordingly, the successful application of anti-PD-1/PD-L1 antibody drugs requires immunohistochemical assessment of PD-L1 for the selection of eligible patients. Although PD-L1 IHC has some limitations as a predictive biomarker and other biomarkers, such as tumor mutational burden and microsatellite instability, have been proposed , the establishment of a sensitive and robust PD-L1 IHC for feline tumors is of particular importance for informed selection of cat subjects that are most likely to benefit from anti-PD-1/PD-L1 therapies. To explore the feasibility of anti-PD-1/PD-L1 therapy for feline tumors, we first determined the complete CDS of feline PD-L1 and PD-L2 mRNA and compared the deduced amino acid sequences with known orthologs in other mammalian species. Recombinant proteins of feline PD-1, PD-L1, and PD-L2 were prepared for molecular characterization, and receptor–ligand binding was assessed in an expressing cell-based assay. Then, an anti-feline PD-L1 monoclonal antibody (CL1Mab-7) was established and its applicability for expression analysis was evaluated by flow cytometry using feline macrophage and mammary adenocarcinoma cell lines. Finally, IHC using CL1Mab-7 was developed to probe PD-L1 expression in various feline malignant tumors. Feline samples The use of animal-derived samples throughout the study was approved by the Institutional Animal Care and Use Committee, Faculty of Veterinary Medicine, Hokkaido University (approval #20–0093), which has been fully accredited by the Association for Assessment and Accreditation of Laboratory Animal Care International. No animal was sacrificed for this study. Testis tissue samples were freshly resected from a mixed-breed cat under general anesthesia at the Veterinary Teaching Hospital, Faculty of Veterinary Medicine, Hokkaido University. Peripheral blood samples were obtained from healthy cats (mixed breed) maintained at the Experimental Animal Facility, Faculty of Veterinary Medicine, Hokkaido University or a veterinary hospital in Sapporo, Japan. Formalin-fixed and paraffin-embedded (FFPE) feline tissues were obtained from a commercial pathology laboratory (North Lab, Hokkaido, Japan). Cell culture Peripheral blood mononuclear cells (PBMCs) were obtained from heparinized blood samples by density gradient centrifugation on Percoll (GE Healthcare, Buckinghamshire, UK) and cultured in RPMI 1640 medium (Sigma-Aldrich, St. Louis, MO, USA) supplemented with 10% fetal bovine serum (FBS; Thermo Fisher Scientific, Waltham, MA, USA), 2 mM L-glutamine, 200 μg/mL streptomycin, 200 U/mL penicillin (Thermo Fisher Scientific), and 5 μg/mL concanavalin A (ConA; Sigma-Aldrich) for 10 h at 37°C and 5% CO 2 . ExpiCHO-S cells (Thermo Fisher Scientific) were cultured on an orbital shaker (125 rpm) at 37°C and 8% CO 2 in ExpiCHO Expression Medium (Thermo Fisher Scientific). The feline macrophage cell line Fcwf-4 [Fcwf] (ATCC CRL-2787) was cultured in Eagle’s Minimum Essential Medium (ATCC, Manassas, VA, USA) containing 10% FBS (Thermo Fisher Scientific) at 37°C and 5% CO 2 . 
The feline mammary adenocarcinoma cell lines FKNp, FMCp, FMCm, FYMp, FONp, and FONm were cultured in RPMI 1640 medium (Sigma-Aldrich) supplemented with 10% FBS (Thermo Fisher Scientific), 2 mM L-glutamine, 200 μg/mL streptomycin, and 200 U/mL penicillin (Thermo Fisher Scientific) at 37°C and 5% CO 2 . Cell lines were cultured with 100 ng/mL recombinant feline interferon-gamma (IFN-γ; Kingfisher Biotech, St. Paul, MN, USA) for 24 h before analysis, where indicated. Nucleotide sequencing, alignment, and phylogenetic analysis To determine the complete CDS of feline PD-L1 , we designed primers for rapid amplification of cDNA ends (RACE; fePD-L1_5′GSP1–3, fePD-L1_3′GSP1, and -2) based on a previously reported partial sequence of feline PD-L1 (GenBank accession number: EU246348.2). Total RNA was extracted from cat testes using TRI reagent (Molecular Research Center, Cincinnati, OH, USA), and 3′ and 5′ unknown sequences were amplified using 3′ and 5′ RACE System for Rapid Amplification of cDNA Ends (Thermo Fisher Scientific) and TaKaRa Ex Taq polymerase (Takara Bio, Shiga, Japan). The amplicons were purified, cloned into pGEM-T Easy Vector (Promega, Madison, WI, USA), and sequenced using GenomeLab GeXP Genetic Analysis System (SCIEX, Framingham, MA, USA). Based on the obtained sequence, a primer pair was designed to amplify the whole CDS of feline PD-L1 (fePD-L1_F and R), and polymerase chain reaction (PCR) was performed using TaKaRa Ex Taq polymerase (Takara Bio) and cDNA templates synthesized from ConA-stimulated feline PBMCs, as previously described . Nucleotide sequences of the amplicons were determined as described above. Similarly, to determine the complete CDS of feline PD-L2 , a primer pair was designed based on NM_001290244.1 (fePD-L2_F and R), PCR was performed using cDNAs of ConA-stimulated feline PBMCs, and the nucleotide sequences of the amplicons were determined as described above. The CDSs of feline PD-L1 and PD-L2 were translated into deduced amino acid sequences and aligned with known orthologs in other mammalian species using BioEdit . The signal peptide, transmembrane domain, and other conserved domains were predicted using SignalP 6.0 ( https://services.healthtech.dtu.dk/service.php?SignalP ), TMHMM 2.0 ( https://services.healthtech.dtu.dk/service.php?TMHMM-2.0 ), and CD-Search ( https://www.ncbi.nlm.nih.gov/Structure/cdd/wrpsb.cgi ), respectively. Potential N-linked glycosylation sites were predicted using NetNGlyc-1.0 ( https://services.healthtech.dtu.dk/service.php?NetNGlyc-1.0 ). The percentages of identical and positive matches between amino acid sequences were calculated using Protein BLAST (blastp; https://blast.ncbi.nlm.nih.gov/Blast.cgi?PAGE=Proteins ). Unrooted neighbor-joining phylogenetic trees were inferred using MEGA version 6.06 . Nucleotide sequences of feline PD-L1 and PD-L2 were submitted to the DDBJ/EMBL-Bank/GenBank database under accession numbers LC735019, LC735020, and LC735021. Primer sequences used in this study are listed in . Transient expression of recombinant proteins To prepare feline PD-1 or PD-L1-expressing cells, we constructed expression vectors for feline PD-1 or PD-L1 fused to the C-terminal enhanced green fluorescent protein (fePD-1–EGFP; fePD-L1–EGFP) using pEGFP-N2 vector (Clontech, Palo Alto, CA, USA). 
Nucleotide sequences encoding feline PD-1 (NM_001145510.1) and PD-L1 were amplified by PCR using specific primers with restriction enzyme cleavage sites (fePD-1–EGFP_F and R; fePD-L1–EGFP_F and R, ), and subcloned into the multicloning site of the pEGFP-N2 vector. The expression plasmids were purified using FastGene Xpress Plasmid PLUS Kit (Nippon Genetics, Tokyo, Japan) and stored at −20°C until use. ExpiCHO-S cells were transfected with the plasmid using ExpiFectamine CHO Transfection Kit (Thermo Fisher Scientific) and cultured for 24 h prior to analysis. The subcellular localization of fePD-1–EGFP and fePD-L1–EGFP was visualized using ZOE Fluorescent Cell Imager (Bio-Rad, Hercules, CA, USA). To prepare soluble feline PD-1, PD-L1, and PD-L2 proteins, we constructed expression vectors for feline PD-1, PD-L1, or PD-L2 extracellular region fused to the C-terminal rabbit IgG Fc region (fePD-1–Ig, fePD-L1–Ig, and fePD-L2–Ig) using pCXN2.1 vector (kindly provided by Dr. T. Yokomizo, Juntendo University, Tokyo, Japan) . Nucleotide sequences encoding the extracellular regions of feline PD-1 (1–169), PD-L1 (1–237), and PD-L2 (1–219) were amplified by PCR using specific primers with restriction enzyme cleavage sites (fePD-1–Ig_F and R, fePD-L1–Ig_F and R, and fePD-L2–Ig_F and R; ), and subcloned into the multicloning site of the pCXN2.1 vector containing a gene cassette encoding the Fc region of rabbit IgG. The expression plasmids were purified using FastGene Xpress Plasmid PLUS Kit (Nippon Genetics) or NucleoBond Xtra Midi (Takara Bio) and stored at −20°C until use. ExpiCHO-S cells were transfected with the plasmid using ExpiFectamine CHO Transfection Kit (Thermo Fisher Scientific) and cultured for 10 d. Recombinant fusion proteins were purified from the culture supernatant by protein A affinity chromatography using Ab-Capcher ExTra (ProteNova, Kagawa, Japan). After elution with Pierce IgG Elution Buffer (Thermo Fisher Scientific), the buffer was replaced with phosphate-buffered saline (PBS; FUJIFILM Wako Pure Chemical, Osaka, Japan) using PD MidiTrap G25 (Cytiva, Tokyo, Japan). The protein concentration was measured using Pierce BCA Protein Assay Kit (Thermo Fisher Scientific). Sodium dodecyl sulfate polyacrylamide gel electrophoresis (SDS-PAGE) was performed using 2× Laemmli sample buffer (Bio-Rad) prepared with 2-mercaptoethanol (reducing conditions; FUJIFILM Wako Pure Chemical) or distilled water (non-reducing conditions). Recombinant fusion proteins were incubated at 96°C for 5 min in sample buffer and separated by electrophoresis using SuperSep Ace 5–20% gradient gel (FUJIFILM Wako Pure Chemical). Precision Plus Protein Dual Color Standards (Bio-Rad) were used as molecular weight markers. Protein bands were visualized by Coomassie brilliant blue (CBB) staining using Quick-CBB kit (FUJIFILM Wako Pure Chemical). Monoclonal antibody against feline PD-L1 To establish the anti-feline PD-L1 monoclonal antibody CL1Mab-7, BALB/c mice (4-week-old, female; CLEA Japan, Tokyo, Japan) were immunized intraperitoneally with 100 μg fePD-L1–Ig formulated with Imject Alum Adjuvant (Thermo Fisher Scientific). After the second booster administration, splenocytes were harvested and fused with P3X63Ag8U.1 (ATCC CRL-1597) cells using PEG1500 (Roche Diagnostics, Indianapolis, IN, USA) to generate a hybridoma pool. Cell cloning was performed by limiting dilution and monoclonal antibodies were purified from the culture supernatant using Protein G Sepharose 4 Fast Flow (GE Healthcare). 
Isotype-specific secondary antibodies (Southern Biotech, Birmingham, AL, USA) were used to identify the CL1Mab-7 antibody (sub)class (mouse IgG 1 , κ). Flow cytometry To examine whether feline PD-1 binds to PD-L1/PD-L2, 2 × 10 5 fePD-1–EGFP- and fePD-L1–EGFP-expressing cells were incubated for 30 min with 10 μg/mL of either fePD-1–Ig, fePD-L1–Ig, or fePD-L2–Ig at room temperature (RT), followed by another 30 min incubation with Alexa Fluor 647-conjugated F(ab′) 2 -goat anti-rabbit IgG (H+L) secondary antibody (Thermo Fisher Scientific). Rabbit IgG (Southern Biotech) was used as a control protein. Cell fluorescence was analyzed using BD FACSLyric system (BD Biosciences, Franklin Lakes, NJ, USA). To examine PD-L1 expression on cell lines, 2 × 10 5 cells were incubated for 30 min with 10 μg/mL CL1Mab-7 or mouse IgG 1 κ isotype-matched control antibody (15H6; Southern Biotech) at RT, followed by another 30 min incubation with Alexa Fluor 647-conjugated F(ab′) 2 -goat anti-mouse IgG (H+L) secondary antibody (Thermo Fisher Scientific). Cell fluorescence was analyzed using BD FACSLyric system (BD Biosciences). For fePD-1–EGFP- and fePD-L1–EGFP-expressing cells, only EGFP-positive cells were gated and subjected to further analysis. Immunohistochemistry We prepared 4-μm-thick sections of FFPE feline squamous cell carcinoma ( n = 5), mammary adenocarcinoma ( n = 5), fibrosarcoma ( n = 5), and renal cell carcinoma ( n = 3) tissue samples, and antigen retrieval was performed twice by microwave treatment for 5 min in Tris-EDTA buffer (pH 9.0; Agilent Technologies, Santa Clara, CA, USA). Endogenous peroxidase activity was blocked by incubating sections in methanol containing 0.3% hydrogen peroxide. Sections were incubated with 10 μg/mL CL1Mab-7 or mouse IgG 1 κ isotype-matched control antibody (MG1-45; BioLegend, San Diego, CA, USA) at RT for 30 min, followed by another 30 min incubation with Histofine simple stain MAX PO (MULTI) (Nichirei, Tokyo, Japan). A chromogenic reaction was developed using 3, 3′-diaminobenzidine tetrahydrochloride (Nichirei). Mayer’s hematoxylin was used as a counterstain. 
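As a rough illustration of the alignment-based identity comparison and neighbor-joining tree construction described in the "Nucleotide sequencing, alignment, and phylogenetic analysis" subsection above, the following Python/Biopython sketch shows one way such an analysis could be reproduced. The input file name and sequence identifiers are placeholders, and the study itself used Protein BLAST and MEGA 6.06 rather than this code.

```python
from Bio import AlignIO, Phylo
from Bio.Phylo.TreeConstruction import DistanceCalculator, DistanceTreeConstructor

# Hypothetical multiple alignment of PD-L1 amino acid sequences in FASTA format;
# the file name and the record identifiers used below are placeholders
alignment = AlignIO.read("pdl1_orthologs_aligned.fasta", "fasta")

# Identity-based pairwise distances (fraction of mismatched aligned positions)
calculator = DistanceCalculator("identity")
distances = calculator.get_distance(alignment)

# Percent identity between two records, assuming these names exist in the alignment
percent_identity = (1 - distances["feline_PD-L1", "human_PD-L1"]) * 100
print(f"feline vs human PD-L1 identity: {percent_identity:.1f}%")

# Unrooted neighbor-joining tree built from the same distance matrix
tree = DistanceTreeConstructor().nj(distances)
Phylo.draw_ascii(tree)
```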
Characterization of feline PD-L1 and PD-L2 To date, the complete nucleotide sequences of feline PD-L1 and PD-L2 mRNAs have not been reported in the literature, whereas only a partial sequence of PD-L1 and a complete sequence of PD-1 have been described . We first determined the mRNA sequences of feline PD-L1 and PD-L2 using cDNAs synthesized from PBMCs of mixed-breed cats. The complete CDS of feline PD-L1 and PD-L2 was 876 and 819 bp in length, encoding 291 and 272 polypeptide sequences, respectively. The PD-L1 mRNA sequence was identical among four individual cats, whereas the PD-L2 mRNA sequence contained a single nucleotide polymorphism between the two cats tested (C520A; silent mutation). The deduced amino acid sequence of feline PD-L1 and PD-L2 showed high sequence identity with orthologs in other mammalian species (66–83%, ). Both feline PD-L1 and PD-L2 were predicted to contain a signal peptide, two immunoglobulin superfamily (IgSF) domains, and a transmembrane domain (Figs and ). 
In the phylogenetic analyses, feline PD-L1 and PD-L2 formed a cluster with orthologs in Cetartiodactyla (e.g., cattle and pig) and Carnivora (dog), and were relatively distant from those in Primates (human and rhesus macaque) and Rodentia (mouse and rat; Figs and ). Characterization of feline PD-1, PD-L1, and PD-L2 Next, we expressed recombinant feline PD-1 and PD-L1 as C-terminal EGFP-fusion proteins and examined their subcellular localization using fluorescence microscopy. In both fePD-1–EGFP- and fePD-L1–EGFP-expressing cells, EGFP fluorescence was concentrated at the cell membrane, indicating cell surface expression of the recombinant fusion proteins . To examine the binding between PD-1 and PD-L1/PD-L2, the extracellular regions of PD-1, PD-L1, and PD-L2 were expressed as soluble rabbit IgG Fc fusion proteins. Purified fePD-1–Ig, fePD-L1–Ig, and fePD-L2–Ig migrated at approximately 60 kDa and 140 kDa under reducing and non-reducing conditions, respectively, suggesting dimer formation by a disulfide bond at the hinge region of rabbit IgG Fc under non-reducing conditions . The theoretical molecular weights (as monomers) of fePD-1–Ig, fePD-L1–Ig, and fePD-L2–Ig, calculated from the deduced amino acid sequences, were 42.3, 50.9, and 48.3 kDa, respectively. Because four, five, and five potential N-glycosylation sites were found in the extracellular regions of feline PD-1, PD-L1, and PD-L2, respectively, these proteins were expected to be highly glycosylated, which influences migration in SDS-PAGE. Then, the binding of the Fc-fusion proteins to fePD-1–EGFP- and fePD-L1–EGFP-expressing cells was assessed by flow cytometry. As expected, both fePD-L1–Ig and fePD-L2–Ig were bound to fePD-1–EGFP-expressing cells, and only fePD-1–Ig was bound to fePD-L1–EGFP-expressing cells , suggesting receptor–ligand interactions between feline PD-1 and PD-L1/PD-L2. PD-L1 expression in feline macrophage and mammary adenocarcinoma cell lines To investigate PD-L1 expression in feline tumors, the anti-feline PD-L1 monoclonal antibody CL1Mab-7 was established by immunizing mice with fePD-L1–Ig. 
Among the five squamous cell carcinoma, five mammary adenocarcinoma, five fibrosarcoma, and three renal cell carcinoma samples tested, PD-L1 was detected in tumor cells of five (100%), four (80%), five (100%), and three (100%) samples, respectively . The normal tubular epithelium was also positive for PD-L1 in renal cell carcinoma specimens. Tumor cells were stained intracellularly and showed faint membrane staining in most specimens, with stromal tissues surrounding the tumor cells being mostly negative for PD-L1. 
ICIs, such as anti-PD-1/PD-L1 antibodies, have become indispensable for the treatment of various malignancies in humans. Recently, ICIs have also been developed for use in veterinary medicine. In cattle infected with bovine leukemia virus, anti-PD-1/PD-L1 antibody treatment has been shown to reactivate antigen-specific T cell responses and reduce proviral load . The therapeutic potential of anti-PD-L1 antibody has also been suggested for bovine mycoplasmosis and Johne’s disease , implying its broad applicability for chronic infections in cattle. Moreover, in dogs with malignant melanoma and undifferentiated sarcoma, treatment with anti-PD-L1 antibody exhibited antitumor efficacy with acceptable safety profiles . Because PD-L1 expression is found in several canine malignancies, including squamous cell carcinoma, mammary adenocarcinoma, transitional cell carcinoma, and hemangiosarcoma , ICIs may also be effective for these tumors. Based on the similarity in tumorigenesis and the immune system among mammalian species, we hypothesized that immune checkpoint blockade could be an effective therapeutic strategy for feline tumor treatment. We found that the molecular characteristics of feline PD-1/PD-L1/PD-L2 are similar to those of its orthologs in other mammalian species in the aspects of evolutionary history, membranous expression, and receptor–ligand binding activities. The monoclonal antibody against feline PD-L1 CL1Mab-7 successfully detected the inducible expression of PD-L1 on feline cell lines, and IHC using the same antibody revealed PD-L1 expression in tissue samples of feline malignant tumors. These results strongly encourage further investigation of the PD-1/PD-L1 pathway as a therapeutic target for feline tumors. 
The high sequence identities and conserved domain structures of PD-1/PD-L1/PD-L2 among mammalian species suggest a functional similarity of the PD-1 pathway as a suppressive mechanism in T cell-mediated immunity. Although immunosuppressive functional motifs (immunoreceptor tyrosine-based inhibitory motif [ITIM] and immunoreceptor tyrosine-based switch motif [ITSM]) are conserved in the cytoplasmic tail of feline PD-1 , its signal transmission and suppression of T cell functions are yet to be investigated experimentally. Moreover, whether reactivation of T cell responses can be achieved by blocking the PD-1/PD-L1 pathway is a topic of future research. To date, no blocking antibody has been developed for feline PD-1/PD-L1, which limits the functional assessment of the PD-1 pathway in cats. Work is now in progress to identify blocking antibodies from the hybridoma pool of anti-PD-L1 monoclonal antibodies. IFN-γ treatment induced PD-L1 expression in feline macrophage and mammary adenocarcinoma cell lines, consistent with the findings in humans and dogs . IFN-γ signaling induces interferon-responsive factor-1 (IRF-1) via the Janus kinase/signal transducer and activator of transcription (JAK/STAT) pathway, which upregulates the transcription of PD-L1 . Although de novo synthesis of PD-L1 may be involved, the detailed mechanism of PD-L1 induction in feline cells remains unclear. In the human tumor microenvironment, various regulatory mechanisms of PD-L1 expression have been reported, including genomic alterations, epigenetic regulation, transcriptional regulation, post-transcriptional regulation, and post-translational modifications . Interestingly, one out of the six mammary adenocarcinoma cell lines (FMCp, ) did not express PD-L1 even after IFN-γ stimulation, suggesting a defect in the PD-L1 expression machinery or the IFN-γ signaling pathway. Because a similar portion (1/5, 20%) of mammary adenocarcinoma tissues was PD-L1-negative in our IHC results, investigation of the frequency and types of, for example, genetic mutations in mammary adenocarcinoma may provide further insights into the regulatory mechanisms of PD-L1 in feline tumors. We observed PD-L1 expression in the vast majority of feline tumor samples tested in this study. IHC using anti-PD-L1 antibody clone 22C3 revealed that 71.3%, 51.2%, and 38.1% of human squamous cell carcinomas, renal cancers, and breast cancers, respectively, were PD-L1-positive . The high rates of PD-L1 expression in the current study could be an artifact of a small sample size; therefore, our results need to be verified with a larger sample size of feline tumors. Fibrosarcoma is a rare soft tissue tumor in humans that, based on limited data, expresses PD-L1 at a low positive rate (15%) . Fibrosarcoma is more common in cats than in humans and is often diagnosed as a histologic subtype of feline injection site sarcomas (FISS) . The high frequency of PD-L1 expression in feline fibrosarcoma suggests that, in the unique etiopathogenesis of feline FISS (where chronic inflammation is thought to trigger tumorigenesis ), PD-L1 induction by inflammatory signaling (IFNs, IL-6, TNF-α, etc.) may play a pivotal role in the escape from adaptive immune responses. Nonetheless, the detection sensitivity of PD-L1 IHC using CL1Mab-7 seemed high and sufficient, highlighting its potential for use in the screening of cats that would respond to anti-PD-1/PD-L1 therapy. 
In conclusion, our molecular characterization of feline PD-1/PD-L1/PD-L2 suggests that the immunosuppressive functions of the PD-1 pathway are conserved in cats, and PD-L1 IHC revealed a potential immune evasion mechanism commonly exploited by feline tumors. Although whether feline T cell-mediated immunity can be reinvigorated by the PD-1/PD-L1 blockade remains to be elucidated, our results support further investigation of ICIs as potential immunotherapies for feline tumors.
Predicting Prognosis and Platinum Resistance in Ovarian Cancer: Role of Immunohistochemistry Biomarkers
0c329d5a-6fc1-4bef-8bae-31f741388962
9916805
Anatomy[mh]
Ovarian cancer is considered the fifth leading cause of cancer-related death in women. Worldwide, an estimated 313,959 new ovarian cancer cases and 207,252 ovarian cancer-related deaths occurred in 2020 . Despite the improvement in treatment for ovarian cancer, survival trends have remained poor due to chemoresistance and a lack of biomarkers to detect the disease early . Hence, ovarian cancer is often diagnosed at an advanced stage, with most new cases spreading beyond the primary site. Over the last 30 years, mortality rates from ovarian cancer have remained poor , with patients with advanced disease (Stage III and IV) having a 10-year survival rate of 10–30%. Epithelial ovarian cancer (EOC) originates from ovarian surface epithelium (mesothelium) and accounts for more than 85% of all ovarian tumours. The heterogeneity of EOC, which consists of several tumour subtypes with greatly divergent clinicopathologic characteristics and behaviour, poses a major challenge to understanding the pathophysiology of the disease. Various patient and tumour parameters, such as age, genetic makeup, and tumour traits including stage, grade, histologic subtype, and chemotherapy sensitivity, therefore have an impact on the prognosis of ovarian cancer . High-grade serous ovarian cancer (HGSOC) is the most common and aggressive form of EOC, accounting for approximately 70% of all cases, and it is the leading cause of cancer-related death among all gynaecological cancers worldwide . Less common types of epithelial ovarian cancers include: endometrioid carcinoma, which accounts for about 20% of EOC and occurs more commonly in women with endometriosis ; low-grade serous ovarian carcinoma (LGSOC), which is a slow-growing tumour that accounts for about 5% of EOC ; mucinous carcinoma, which is more distinct and tends to be large (around 8 inches or 20 cm) ; and ovarian clear cell carcinoma (OCCC), which accounts for approximately 5% of all ovarian carcinomas and is characterized by a high recurrence rate . Optimal cytoreductive surgery and platinum-based chemotherapy using the combined carboplatin-paclitaxel regimen have been the standard treatments for EOC . Despite an initial good response to first-line therapy, the development of chemotherapy-resistant and refractory diseases ensues. As a result, chemotherapy sensitivity decreases, relapse rates increase, and long-term survival for ovarian cancer declines. It was shown that up to two-thirds of patients with advanced ovarian cancer experience cancer recurrence within 18 months from the time of diagnosis regardless of the first-line therapy . Patients undergoing platinum chemotherapy are classified into platinum-sensitive or platinum-resistant according to the time from the end of treatment to the recurrence of the disease (platinum-free interval). Platinum resistance, defined as disease recurrence within 6 months of completion of first-line platinum-based chemotherapy, occurs in approximately 25% of cases, and the median progression-free survival (PFS) is only 9–12 months on average . Conversely, platinum-sensitive patients have a PFS of up to 24 months . Currently, only paclitaxel, pegylated liposomal doxorubicin (PLD), and topotecan are approved by the US FDA to treat platinum-resistant ovarian cancer patients; however, the response rates are poor at 10% to 15% . The taxane analogues, oral etoposide, pemetrexed, and bevacizumab are additional medications with some effectiveness in platinum-resistant ovarian cancer . 
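The platinum-free-interval rule described above can be expressed as a simple decision function. The following minimal Python sketch is an illustration only, using the 6-month cutoff stated in the text; the function name and the example intervals are hypothetical.

```python
def classify_platinum_response(platinum_free_interval_months):
    """Classify relapse by the interval from the end of platinum-based
    chemotherapy to disease recurrence (the platinum-free interval)."""
    if platinum_free_interval_months < 6:
        return "platinum-resistant"
    return "platinum-sensitive"

print(classify_platinum_response(4))   # recurrence at 4 months  -> platinum-resistant
print(classify_platinum_response(14))  # recurrence at 14 months -> platinum-sensitive
```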
Unfortunately, second-line chemotherapy in patients with platinum-resistant ovarian cancer has not been found to be superior to current therapy in terms of progression-free survival or overall survival in randomised phase III trials . The tumour microenvironment in ovarian cancer tissues is associated with altered protein expression patterns, making it a conceivable site of interest for deciphering protein profile patterns and alterations during disease development and treatment intervention . The discovery of molecular tumour traits linked to high-risk early-stage ovarian carcinomas would also enhance risk assessment, potentially influence treatment selection, and direct the development of targeted therapies in the future. Immunohistochemistry (IHC) is a very sensitive and unique technique used to determine tissue constituents (the antigens) with the employment of specific antibodies that can be visualised via a microscope. It has been shown to be a potent technique for the identification and use of biomarkers; examples of this role include Human Epidermal Growth Factor Receptor 2 (HER2) expression in breast cancer and gastroesophageal adenocarcinoma, in addition to the expression of mismatch repair (MMR) proteins in patients with colorectal adenocarcinoma or endometrial carcinoma . Immunohistochemical reactions have been used in different situations within the research or pathological context; the most important applications are: (1) histogenetic diagnosis ; (2) subtyping of neoplasia ; (3) characterisation of the primary site of malignant neoplasias ; (4) investigation of prognostic factors and therapeutic indications of some diseases ; and (5) discrimination between the benign and malignant nature of certain cell proliferations . Although IHC-based platforms for assessing the tumour immune milieu are easily implemented in clinical settings, several IHC-based biomarkers have struggled to achieve therapeutic relevance due to a lack of validation and inaccurate clinical outcome prediction . This narrative review discusses the relevance of IHC-based biomarkers in predicting chemotherapy resistance and prognosis in ovarian cancer while also outlining the drawbacks of using IHC in clinical practice. Search Strategy of Review Keywords and terms of the major concepts for this review included: “Immunohistochemistry (IHC)”, “Ovarian Cancer Subtypes”, “Ovarian Cancer Prognostic Value”, “Chemotherapy Resistance”, and “Ovarian Cancer Biomarkers”, which were developed and combined to form the search strategy. In the section entitled “Signature Alterations in Women with Chemotherapy Resistance” we included more specific keywords for ovarian cancer molecular pathways (e.g., “Tumor Mutation Burden”, “DNA Repair Pathways”, and “Cell Cycle Related Genes”). A systematic search of the Google Scholar, PubMed, Web of Science, and Sci-Hub databases was performed for relevant English-language publications from the time of their inception to October 2022. Results were merged using reference management software (Endnote X9; Thomson Reuters version 12.0.0.2401). The findings of relevant studies are summarized in and . 
Chemotherapy resistance is linked to multiple mechanisms, which include changes in the transport and cellular turnover of the drug as well as alterations in cytoplasmic defence systems and DNA repair mechanisms, leading to the loss of treatment sensitivity . Conventional chemotherapy drugs exert their function not only via cellular machinery that controls the cell cycle, but also through molecular pathways that mediate programmed cell death or apoptosis . Various molecular markers, including biomarkers derived from genomic abnormalities (such as gene mutations, copy number aberrations, and DNA methylation), have been associated with different types of EOC. Some of these biomarkers can predict disease prognosis and chemosensitivity. In addition, over the past ten years, numerous studies have found protein expression differences by examining a large number of ovarian tumour tissues . Ovarian cancers are often linked to mutations of the BRCA genes; p53 mutations, which occur in 50–80% of epithelial ovarian cancers; amplification and overexpression of the HER2/neu gene and the AKT2 gene in about 10–20% of high-grade serous carcinomas; and inactivation of the p16 gene, recorded in 10–17% of epithelial ovarian cancers . Other tumour suppressors and oncogenes, such as KRAS, PTEN , PIK3CA , ARID1A , PPP2R1A , BCL2, MYC, BRAF , ERBB2 , CTNNB1 , and TGF-β, have been implicated in the tumorigenesis of ovarian carcinoma and chemotherapy resistance . While significant efforts have been made to understand the molecular processes underlying ovarian chemotherapy resistance, its pathogenesis and progression model remain incompletely explained. The identification of molecular signatures is becoming more important for individualized targeted ovarian cancer treatment. From a therapeutic standpoint, the discovery of biomarkers has a significant role in predicting the results of chemotherapy treatment, which is essential in assisting clinicians in weighing the possibility of chemotherapy resistance and predicting the quality of life after chemotherapy . Biomarkers are classified into two subtypes: prognostic and predictive markers. Prognostic indicators reveal disease outcome, while predictive markers refer to how well a patient responds to chemotherapy, which can be a therapeutic factor. To date, limited information is available for prognostic biomarkers associated with disease relapse and chemotherapy resistance in ovarian cancer. To improve the management of clinical outcomes, biomarkers must be specific and sensitive, and they should be convenient and inexpensive to test. Hence, studying the biomarkers within the tumour tissue microenvironment may yield better success for the potential discovery of relevant and critical biomarkers for prognostication and as therapeutic targets. Therefore, efforts are ongoing to identify IHC biomarkers as potential prognostic markers for ovarian cancer in the clinical setting. 
In this review, the findings from relevant studies on IHC biomarkers and their association with chemoresistance and prognosis are summarized in for all types of EOC and for the HGSOC and OCCC subtypes.
3.1. Vascular Endothelial Growth Factor (VEGF)
VEGF is a signal protein, expressed by many cell types, that promotes the growth of blood vessels. VEGF plays a significant role in ovarian cancer by supporting tumour angiogenesis and by stimulating the vascular permeability that drives ascites formation . High levels of VEGF are linked with primary resistance to platinum-based chemotherapy, and the immunohistochemical level of VEGF expression is highly associated with platinum sensitivity and overall patient survival . In another study, the author demonstrated that patients with platinum resistance had a higher proportion of elevated VEGF levels compared to those in the platinum-sensitive group (86% vs. 2%) . The same author determined the median survival in the patient group with a high VEGF score to be 11 months, compared to 32 months in the group with a low VEGF score . Consequently, VEGF expression was inversely related to overall survival (OS) and strongly correlated with platinum-resistant EOC ( p < 0.0001) . When compared to the chemo-resistant patient group, the VEGF levels in the chemo-sensitive patients were significantly lower . In a multivariate regression model, the presence of VEGF in the tumour was a significant predictor of how well patients would respond to platinum-based chemotherapy; however, neither the stage nor the grade of the tumour, nor the patient’s age, was shown to be related to VEGF expression .
3.2. CD133
Often known as Prominin-1, CD133 is a five-transmembrane glycoprotein with a molecular weight of 97 kDa, encoded by the PROM1 gene on chromosome 5. CD133 is a putative marker for cancer stem cells in ovarian cancer, which have been known to predict resistance to chemotherapy . According to earlier research, CD133-positive cells were more aggressive and tumorigenic in vitro and/or in vivo than their CD133-negative progeny and were more resistant to chemotherapy . Similarly, 31% of 400 ovarian cancer samples had CD133 expression, which, according to a log-rank test, was linked to both shorter OS times and shorter disease-free survival times ( p = 0.007 and p < 0.001, respectively) . The hypothesis that CD133 and cancer stem cells are related is supported by the finding that CD133 expression is a predictor of poor clinical outcomes for ovarian cancer patients .
3.3. P53
p53 is a tumour-suppressor gene located on the short arm of chromosome 17 , which regulates cell growth. p53 gene mutations are among the most frequently found genetic anomalies in human neoplasia . Numerous studies have examined p53 immunoreactivity in ovarian carcinomas, and the majority of these have found a significant percentage of positivity, particularly in serous tumours . A recent investigation discovered functional p53 mutations in 50.8% of high-grade serous ovarian cancers (HGSOC) and 8.3% of low-grade ovarian cancers . In addition, p53 was much more strongly expressed in HGSOC than in low-grade cases ( p = 0.005): only 18% (4 of 22) of low-grade cases showed 5+ staining, compared with 64% (30 of 47) of HGSOC cases .
p53 mutations were discovered in 56 percent (99 of 178) of the epithelial ovarian tumours, and the time to progression and OS were considerably reduced in patients with p53 mutations compared to those with normal wild-type p53 ( p = 0.029 and p = 0.014, respectively). In 62 percent (110 of 178) of ovarian cancer tissues, p53 protein overexpression (>10 percent positively stained nuclei) was discovered . In cases with p53 overexpression, the time to progression and OS were shorter (cut-point, 10%: p = 0.071 and p = 0.056) . Individuals with p53 overexpression experienced resistance to adjuvant cisplatin or carboplatin treatment substantially more frequently ( p = 0.001) than patients with normal p53 . The effective induction of apoptosis by a functional p53 protein determines the sensitivity of tumour cells to various chemotherapeutic agents, and p53 loss can increase chemotherapy resistance ; however, opinions on the correlation between chemotherapy sensitivity and p53 status are still divided.
3.4. MIB-1/KI-67
The Ki-67 gene (10q25) is located on the long arm of human chromosome 10 . The quantification of the reactive expression of the Ki-67 antigen by immunohistochemistry has been demonstrated to provide an estimate of the tumour’s proliferative capacity and has therefore been widely utilized as a reliable prognostic marker in almost all types of cancers, including those of the lymphatic system, lung, brain, breast, cervix, uterus, and ovary, as well as soft tissue sarcoma . The key feature of this biomarker is that its expression is absent in quiescent cells (G0) and present in all of the active cell cycle phases (G1, S, G2, and mitosis) in proliferating tissues . The strongest areas of Ki-67 immunostaining are assessed, and all discernible nuclear staining, regardless of intensity, is considered positive immunoreactivity. Normally, immunostaining is restricted to the nucleus, and only mitosis is associated with cytoplasmic positivity. The percentage of positively stained cells, counted using a high-power microscope objective (×400), is defined as the Ki-67 labelling index (Ki-67 LI); a simple computational sketch of this calculation is given below. A high Ki-67 LI has been associated with a poor prognosis in numerous malignancies, including ovarian cancer . Mindbomb E3 ubiquitin protein ligase 1 (MIB-1 antibody) immunohistochemistry is a relatively new technique for determining the Proliferative Index (PI) of a neoplastic lesion . The monoclonal mouse antibody MIB-1 is the standard for demonstrating PI for the Ki-67 antigen in formalin-fixed, paraffin-embedded specimens. It reacts with the nuclear Ki-67 antigen, which comes in two isoforms with molecular masses of 345 and 395 kDa . Numerous studies have linked the presence of this IHC marker in EOC with other prognostic indicators such as histologic subtype, tumour grade, FIGO stage, and treatment response, as well as with survival rates . In addition, the mean MIB-1 index in HGSOC was found to be 55.4% compared with 23.0% in low-grade ovarian serous cancer . Similarly, another study on the MIB-1 proliferative index (an independent predictor of lymph node metastasis) has shown a significantly lower value in low-grade ovarian cancer (16.3%) when compared with HGSOC (47.8%) . Elevated Ki-67 LI was linked to high-grade tumours (69.9%), high-grade serous tumours (65.34%), and advanced FIGO staging (70.6%) in a study involving 202 women . However, Ki-67 LI and CA 125 levels did not significantly correlate.
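Since the Ki-67 LI described above is simply a proportion, a minimal computational sketch can make the calculation and an expression cut-off explicit; the field counts and the 10% cut-point below are illustrative assumptions, as published studies use varying thresholds.

```python
def ki67_labelling_index(positive_nuclei: int, total_nuclei: int) -> float:
    """Percentage of Ki-67/MIB-1 positive tumour nuclei among all counted nuclei."""
    if total_nuclei == 0:
        raise ValueError("at least one nucleus must be counted")
    return 100.0 * positive_nuclei / total_nuclei

# Counts pooled from several x400 high-power fields (illustrative numbers only)
fields = [(112, 420), (95, 380), (130, 455)]   # (positive, total) per field
positive = sum(p for p, _ in fields)
total = sum(t for _, t in fields)
li = ki67_labelling_index(positive, total)

CUTOFF = 10.0  # illustrative cut-point; the appropriate threshold depends on the study
print(f"Ki-67 LI = {li:.1f}% -> {'high' if li > CUTOFF else 'low'} proliferative index")
```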
The growth fraction of a tumour cell population can be determined with outstanding cost-effectiveness using the marker Ki-67. When paired with the Ki-67 LI in the histopathology report, the histological grade and FIGO stage of EOC can aid in subtype distinction, prognostication, determining whether adjuvant treatment is necessary, and survival analysis . In a study involving seventy-three patients with EOC , Ki-67 expression in combination with survivin and Topoisomerase IIα was evaluated by immunohistochemistry on formalin-fixed, paraffin-embedded tissue sections in relation to the response to chemotherapy. Nuclear staining for all antibodies was scored in a three-layer system, and staining >10% was accepted as expression. It was found that Ki-67 was related to poor OS ( p = 0.005); however, there was no association between Ki-67 expression and histological subtype, stage or grade of ovarian cancer . It has been well established that Ki-67 immunostaining of human tumours has diagnostic and prognostic significance. Ki-67 immunostaining has been carried out on a variety of histological and cytological specimens, including frozen sections, smears, and cell suspensions. However, the limitation of this epitope is that it does not survive conventional histopathological fixation, such as in formaldehyde or alcohol. The regular inclusion of the Ki-67 LI/MIB-1 IHC marker as a diagnostic and prognostic feature in histopathology reports would pave the way for a better understanding of biological behaviour and the modification of treatment plans. Ki-67 antigen/MIB-1 antibody reactive staining, therefore, can be employed as a diagnostic and predictive tool to direct the clinical care of ovarian cancer. Recent advances in methodological refinement have made Ki-67 antigen immunostaining a promising approach for determining the PI .
3.5. The Mitotic Arrest Deficiency Protein 2 (MAD2)
MAD2 is essential to the operation of the Spindle Assembly Checkpoint (SAC), which mediates the attachment of spindle microtubules to kinetochores on chromosomes and the separation of chromosomes during mitosis. MAD2 expression was substantially higher in patients with no recurrence compared to those with recurrence (24 vs. 17, p = 0.023) . When the 41 cases were divided into low- and high-expression groups, progression-free survival did not differ substantially between the two groups ( p = 0.0685), but the low-expression group had a shorter OS compared to the high-expression group ( p = 0.0188) . This suggests that MAD2 expression levels can predict susceptibility to anticancer medications and the likelihood of recurrence. In addition, in another study, the immunohistochemical score of MAD2 protein was negatively correlated with the progression-free survival of women with HGSOC ( p = 0.0003), with a hazard ratio of 4.69. Reduced expression of MAD2 protein indicates a defective mitotic checkpoint and potentiates resistance in ovarian cancer cells, which might eventually lead to a recurrence of the disease . MAD2 mediates paclitaxel-induced mitotic cell death; when MAD2 is downregulated, the cellular response to paclitaxel is therefore diminished.
3.6. Check Point Kinase 2 (Chk2)
Chk2 is a central key protein that mediates the response to genotoxic stress . A positive response to platinum-based chemotherapy is associated with high Chk2 expression.
In an immunohistochemistry study involving 125 women with advanced-stage HGSOC who had residual disease of less than 2 cm after surgery, it was shown that high expression of Chk2 in pre-treatment ovarian cancer tissues was related to a good response to platinum-based chemotherapy (OR = 0.132, p = 0.014) compared to low Chk2 expression . Consistently, in a different study, Chk2 depletion reduced the platinum sensitivity of ovarian cancer cell lines: it eliminated the cisplatin-induced S-phase cell cycle arrest and increased long-term resistance to cisplatin, indicating that Chk2 should not be used as a therapeutic target in HGSOC patients .
3.7. Insulin-like Growth Factor 1 Receptor (IGF-1R)
IGF-1R is a tyrosine kinase commonly found to be overexpressed in women with ovarian cancer . It is crucial for cell growth, differentiation, and death, and it may contribute to the development of cancer . Numerous epidemiological studies have looked at the connection between circulating IGF-1 levels and the risk of ovarian cancer, but no correlation was identified. An investigation of the prognostic significance of IGF-1R levels in a small cohort of 19 women with HGSOC found a significant rise in the expression of the IGF-1R transcript following six cycles of neoadjuvant chemotherapy (NACT) compared to chemo-naïve tumour tissues. The findings unveiled that women with higher IGF-1R expression had prolonged disease-free survival (DFS: 26.7 months) compared to those with lower IGF-1R expression (DFS: 11.9 months). Increased plasma IGF-I levels were also more frequently found in well-differentiated epithelial ovarian carcinoma ( p = 0.0047) .
3.8. Prostaglandin D2 (PGD2)
Prostaglandins are lipid-based arachidonic acid derivatives that regulate follicle-stimulating hormone (FSH)-mediated proliferation, differentiation, and steroidogenic activity in the normal ovary . IHC evaluation of PGD2 is an independent marker of good prognosis in HGSOC. In a study of 114 HGSOC patients, the IHC analysis revealed that high expression of PGD2 correlated with improved disease-free survival ( p = 0.009), the lack of relapse ( p = 0.039), and platinum-based therapy sensitivity ( p = 0.016). This study therefore concluded that the presence of PGD2 in ovarian tissue predicted a low risk of relapse when analysed using multiple Cox regression (hazard ratio, 0.37; p = 0.002) and was thus a good prognostic factor for women with HGSOC .
3.9. Endonuclease Non-Catalytic Subunit (ERCC1)
ERCC1 is a protein critical to the nucleotide excision repair pathway. It was shown that NACT-treated HGSOC tissues had a two-fold increase in ERCC1 expression compared to chemo-naïve HGSOC tissues ( p < 0.0001) . The neoadjuvant group with high ERCC1 had a mean overall survival of 141.6 months, which was noticeably longer than the 61 months seen in the ERCC1-absent group ( p = 0.028) . A relationship between ERCC1 expression and tumour-infiltrating lymphocytes (TILs) is also suggested by this author, but further research is needed to confirm this. ERCC1, hence, can act as a potential biomarker that can predict platinum response and OS in women with ovarian cancer undergoing NACT .
3.10. Notch Receptor 3
Notch 3 is a bona fide oncogene, altered in approximately 20% of women with HGSOC. It has a definite role in both the acquisition of chemoresistance and disease progression. Several studies have found Notch 3 to be a significant prognostic factor in women with relapsed tumours .
In a study that included 25 women with HGSOC followed up for a duration of 32 months, nine (36%) clinically displayed tumour recurrence with acquired chemoresistance to first-line chemotherapy agents consisting of cisplatin and paclitaxel. The total cohort was divided into two groups: a higher-expressing group (n = 12) and a lower-expressing group (n = 13). Higher expression of Notch 3 (>2-fold) was significantly associated with chemo-resistant serous carcinomas compared to the low-expressing group (58.3% vs. 15.4%), suggestive of the role of Notch 3 as a possibly valuable predictive marker for chemoresistance . In addition, a possible association between Notch 3 and stage III/IV ovarian adenocarcinoma with respect to poorer progression-free survival was highlighted, where 3 out of 5 women with a relapse (within 6 months after first-line chemotherapy) had Notch 3 overexpression prior to chemotherapy treatment .
3.11. Glypican-3 (GPC3)
GPC3 is a heparan sulphate proteoglycan on the cell surface that attaches to the cell membrane through glycosylphosphatidylinositol anchors . The GPC3 gene product is thought to interact with a variety of morphogenic or growth factors to control cellular development and apoptosis . In a study of 213 cases with different subtypes of EOC, positive GPC3 expression was demonstrated mainly (44%) in ovarian clear cell carcinoma (OCCC), and less often in other subtypes, including mucinous (4%), endometrioid (5%), and HGSOC (11%) . Although GPC3 expression was significantly associated with poor overall survival in advanced (stage III/IV) OCCC, there was a negative correlation between GPC3 expression and clinicopathological aspects such as tumour stage, lymph node spread, peritoneal metastasis, and death rate . Hence, GPC3 may be a potential marker for the advanced stages of OCCC .
3.12. Aldehyde Dehydrogenase (ALDH1)
ALDH1 is an enzyme that catalyses the oxidation of aldehyde-containing molecules and therefore plays an important role in cellular homeostasis. In cancer cells, it assists both in energy production via retinoic acid (RA) synthesis and in the deactivation of drug molecules (by action on the aldehyde group) . A growing body of research indicates that ALDH may not only be utilised as a marker for stem cells but may also control cellular processes such as self-renewal, growth, differentiation, and radiation and drug resistance. According to recent research, both healthy and cancer cells with high levels of ALDH1 activity have the capacity to serve as stem cells, with the capacity for self-renewal and stress resistance . Many studies have shown that high expression of ALDH1 is associated with an increased capacity for sphere formation, tumorigenicity, and invasiveness . There is strong evidence to support the idea that the specific isoform(s) of ALDH expressed in a tissue determines its function as a stem cell marker. The cytosolic enzyme needed for RA production belongs to the ALDH1 family, and ALDH1A1 has received a lot of attention . Both normal stem cells and stem-like cells that initiate tumours have their biological functions controlled by ALDH1A1, which encourages tumour growth and chemotherapy resistance . It has also been reported that high ALDH1A1 expression on immunohistochemistry of ovarian tissue indicates a poor prognosis in ovarian cancer.
Higher ALDH1 expression levels in ovarian cancer cases were found to be associated with a worse prognosis in both serous ( p = 0.006) and clear cell adenocarcinoma ( p = 0.047) cases, according to immunohistochemical staining of a total of 123 EOC tissues . ALDH1 is therefore a marker of ovarian cancer stem cells, and the degree of ALDH1 expression may be a potential diagnostic indicator of poor prognosis.
3.13. Homeobox A10 (HOXA10)
HOXA10 is a homeobox gene belonging to the HOX family. In a single-centre study conducted at Fudan Hospital, China, 29 women were evaluated for HOXA10 expression and its correlation with survival . According to Kaplan-Meier analysis, HOXA10 expression was negatively correlated with the 5-year survival rate, which was only 30% in the 20 women with positive HOXA10 expression compared with 55.6% in the nine women with absent HOXA10 expression . This suggests that the human homeobox gene A10 may be used as a prognostic factor in ovarian cancer and that HOXA10 could be a therapeutic target for this type of cancer.
3.14. AT-Rich Interaction Domain 1A (ARID1A)
BAF250a, the protein encoded by ARID1A, is one of the accessory subunits of the SWI–SNF chromatin remodelling complex, which modulates the repression/de-repression of several promoters, and it acts as a tumour suppressor by nature . Immunohistochemical analysis of 53 ovarian clear cell carcinoma (OCCC) patient samples provided the first report demonstrating that low levels of ARID1A protein can serve as a marker of poor outcome in OCCC patients: out of the 53 patients, the eight with low ARID1A expression had shorter progression-free survival than those with high expression ( p = 0.044, log-rank test) . Furthermore, in a cohort of 60 patients with epithelial ovarian cancer diagnosed at stages I to IV and treated with platinum-based chemotherapy, the nine patients with loss of ARID1A expression had significantly decreased OS ( p = 0.03) and progression-free survival ( p = 0.01) compared to those with positive ARID1A expression . This demonstrates that low levels of ARID1A protein can serve as a marker of poor outcomes in patients with ovarian cancer.
3.15. Hepatocyte Nuclear Factor-1β (HNF-1β)
HNF-1β is a homeodomain-containing transcription factor which binds to the same DNA sequence as homodimers or heterodimers. Numerous genes are known to be regulated by it, either directly or indirectly . The majority of ovarian clear cell carcinomas (OCCC) over-express HNF-1β; therefore, HNF-1β over-expression is likely to be helpful for the diagnosis of OCCC. The presence of HNF1-binding sites at numerous OCCC-specific hypomethylated genes further supports this notion. In addition, a motif analysis found that HNF1 binding motifs are significantly enriched in genes that comprise the OCCC signature . In two separate large cohort studies, one on an Australian population (the Australian Ovarian Cancer Study, AOCS) and another on a Japanese population (a high-volume Japanese university clinical network, JIKEI), over-expression of HNF-1β was found in OCCC subtypes and was associated with significantly longer PFS ( p = 0.01) and OS ( p = 0.02) . It was demonstrated that HNF-1β increases OCCC cell survival by enhancing resistance to Reactive Oxygen Species (ROS). Hence, HNF-1β inhibition with some type of inhibitor, such as the microRNA miR-802, may yield a therapeutic effect by annulling ROS resistance . According to the author, HNF-1β-induced cell survival was shown to be glucose-dependent .
Therefore, glucose metabolism may be a therapeutic target in OCCC with high HNF-1β expression.
3.16. Cyclooxygenase-1 and 2 (COX-1 and COX-2)
Cyclooxygenases play important roles in catalysing the rate-limiting reactions of prostaglandin and thromboxane synthesis and are often dysregulated in neoplastic tissues. COX-1 and COX-2 are often expressed in every type of epithelial ovarian cancer, suggesting that each may contribute to cancer development or progression . COX-2 expression was analysed by IHC in 87 women with ovarian cancer, and Ferrandina et al. found that the percentage of positive COX-2 expression was significantly higher in non-responders than in patients responding to treatment ( p = 0.043 and p = 0.0018, respectively) . This was the first study to show a link between COX-2 and decreased chemotherapy susceptibility and poor outcomes in a large cohort of patients with primary advanced ovarian cancer who had detectable disease at initial surgery . Another study, by Li et al., found COX-1 protein to be over-expressed in 69.3% of a total of 137 ovarian cancers . COX-2 was present in 70.8% of all epithelial cancer subtypes, with 63.9% of the primary cancers and 81.5% of the metastatic cancers positive for COX-2 . Immunostaining for COX-2 was frequently found at the advancing margin of tumour invasion or in new metastases, whereas COX-1 protein overexpression was observed in ovarian surface epithelial cells, especially those of inclusion cysts . Assessment of COX-2 status may provide additional information to identify patients with ovarian cancer who have a low chance of responding to chemotherapy and are potential candidates for personalized treatments.
3.17. Breast Cancer Gene 1 (BRCA1)
BRCA1 is reported to be downregulated in 15–72% of EOC cases . Several retrospective studies report that women with BRCA1-mutated EOC have a survival advantage attributed to an enhanced response to platinum chemotherapy . In a study of 292 women with ovarian cancer, it was observed that 120 of the cases had ≤10% IHC nuclear staining for BRCA1, while 59% of women expressed >10% BRCA1 staining, which was classified as overexpression of BRCA1 . Women with low BRCA1 expression had improved median OS (41.5 months) and PFS (16.3 months) compared to women with overexpression of BRCA1 in ovarian tumour tissues, whose median OS was 28.7 months and PFS 13.4 months . Similarly, women with absent/low levels of BRCA1 expression receiving platinum/taxane regimens had improved median OS (61.4 vs. 43.2 months) and PFS (23.2 vs. 18.2 months) when compared to women with overexpression of BRCA1 . This demonstrates that low BRCA1 expression can be used as a positive prognostic factor in ovarian cancer and to predict an enhanced response to platinum chemotherapy.
3.18. Programmed Cell Death Ligands (PD-L)
The programmed cell death ligands are the ligands of PD-1, an immunoinhibitory receptor belonging to the CD28/cytotoxic T lymphocyte antigen family . In a recent study looking at PD-L1 and PD-L2 expression in ovarian tissues, it was reported that, among 70 tissue samples obtained during primary surgery, the proportion of high expression (>20%) was 68.6% for PD-L1 and 37.1% for PD-L2 . Women with high PD-L1 expression showed a poorer 5-year survival rate than those with low PD-L1 expression . This suggests that higher expression of PD-L1 on tumour cells leads to impaired antitumour immunity . In another investigation, it was shown that PD-L1 on tumour cells directly suppresses antitumour CD8+ T cells .
PD-L1 expression in ovarian tissues can therefore be a marker of poorer prognosis.
3.19. Forkhead Box Transcription Factor (FOXP3)
FOXP3 is involved in the regulation and function of the immune system. It plays a crucial role in the generation of immunosuppressive CD4+ CD25+ regulatory T cells (Tregs), which induce immune tolerance to antigens . The presence of intraepithelial FOXP3+ cells was associated with increased disease-specific survival ( p = 0.010) . Moreover, ovarian cancers that were triply positive for intraepithelial CD4+, FOXP3+, and CD25+ cells showed a trend towards increased survival ( p = 0.059) . On the other hand, high FOXP3 expression (in the patient subgroup above the 81st percentile) was significantly associated with a worse prognosis in terms of OS (27.8 versus 77.3 months, p = 0.0034) and progression-free survival (18 versus 57.5 months; p = 0.0041) when compared to women with lower FOXP3 expression in their ovarian tissues . High expression levels of FOXP3 might represent a surrogate marker for an immunosuppressive milieu contributing to tumour immune escape .
3.20. Tumour Necrosis Factor Receptor 2 (TNFR2)
It has been found that strong expression of TNFR2 on regulatory T cells (Tregs) in ovarian cancer tissue creates a potent immunosuppressive tumour microenvironment and is associated with a poor clinical response . A study of TNFR2 immunostaining intensity in the ovarian tissues of 126 patients with ovarian cancer found that immunostaining intensity correlated with tumour stage ( p < 0.001): 30% in stages I–II compared to 66% in stages III–IV . Women with positive immunostaining for TNFR2 were found to have a significantly shorter mean survival time ( p = 0.002). This study concluded that tissue expression of TNFR2 in epithelial ovarian cancer correlated with the highest risk of cancer progression . In another recent retrospective study, strong expression of TNFR2 was seen in the ovarian tissue of patients with chemo-naïve advanced ovarian cancers . In this study, strong TNFR2 expression was present in both the platinum-sensitive (71.4%) and platinum-resistant (81.8%) groups, and the difference in TNFR2 expression between the two groups was not statistically significant . There was a trend towards longer PFS with weaker TNFR2 protein expression (5–50%) compared with the stronger expression group (31 vs. 18 months), but this was not statistically significant. Women with TNFR2 over-expression had a longer median PFS interval of 540 days in the platinum-sensitive group and a shorter interval of 90 days in the platinum-resistant group ( p = 0.0001) . The limitation of this study is its small sample size. The TNFR2 marker in IHC has the potential to be used as an indicator of response to chemotherapy, and a larger prospective study may help to confirm its role.
3.21. Signal Transducer and Activator of Transcription 3 (STAT3)
The JAK/STAT3 activation pathway is thought to be crucial for a number of oncogenic activities, including tumour growth, differentiation, angiogenesis, and survival . According to a number of studies, total STAT3 and phosphorylated STAT3 (p-STAT3) are overexpressed in a subgroup of chemotherapy-resistant ovarian cancer cell lines compared to their expression in the corresponding chemotherapy-sensitive cell lines . Because STAT3 is constitutively active in cisplatin-resistant ovarian tumours, it was shown that inhibiting STAT3 signalling could possibly abolish cisplatin resistance in ovarian cancer patients receiving chemotherapy.
A recent systematic review and meta-analysis of 16 eligible studies involving 1747 ovarian cancer patients found that STAT3/p-STAT3 expression was upregulated in ovarian cancer samples compared to normal ovarian tissue, benign tumours, and borderline tumours (OR = 10.14, p < 0.00001; OR = 9.08, p < 0.00001; OR = 4.01, p < 0.00001) . STAT3/p-STAT3 overexpression also correlated with FIGO stage (I–II vs. III–IV) (OR = 0.36, p < 0.00001), tumour grade (G1 + G2 vs. G3) (OR = 0.55; p = 0.001), and the presence of lymph node metastasis (OR = 3.39; p < 0.00001). High STAT3/p-STAT3 expression was associated with shorter OS (HR = 1.67, p < 0.00001) and progression-free survival (PFS) (HR = 1.40, p = 0.007) . This meta-analysis concluded that STAT3/p-STAT3 over-expression likely indicates a poor prognosis in ovarian cancer patients; nevertheless, prospective studies are needed to confirm these associations (an illustrative sketch of the inverse-variance pooling that underlies such summary odds ratios is given at the end of this section). A recent retrospective study found that over-expression of STAT3 was seen in the chemo-naïve ovarian tissue of patients with advanced ovarian cancer (19/25, 76.0%) and in both the platinum-sensitive (78.6%) and platinum-resistant (72.7%) groups. There was a trend towards longer PFS with weaker (5–50%) STAT3 protein expression compared to the over-expressed (>50%) group (34 vs. 18 months), although this was not statistically significant due to the small sample size. Patients with STAT3 over-expression displayed a longer PFS of 120 days in the platinum-resistant group and a better PFS of 660 days in the platinum-sensitive group ( p = 0.0001) . To verify these results, a study with a bigger sample size will be needed.
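The pooled odds ratios reported by the STAT3 meta-analysis above are typically obtained by weighting each study's effect on the log scale. The sketch below shows a minimal fixed-effect (inverse-variance) pooling of hypothetical per-study odds ratios; the numbers are invented, and the original meta-analysis may well have used a random-effects model instead.

```python
import numpy as np
from scipy import stats

# Hypothetical per-study odds ratios with 95% confidence intervals (illustrative values)
odds_ratios = np.array([8.5, 12.0, 9.7])
ci_low      = np.array([4.0, 5.5, 4.8])
ci_high     = np.array([18.0, 26.0, 19.6])

log_or = np.log(odds_ratios)
se     = (np.log(ci_high) - np.log(ci_low)) / (2 * 1.96)   # SE recovered from the CI width

weights   = 1 / se**2                                      # inverse-variance (fixed-effect) weights
pooled    = np.sum(weights * log_or) / np.sum(weights)
pooled_se = np.sqrt(1 / np.sum(weights))
z         = pooled / pooled_se
p_value   = 2 * (1 - stats.norm.cdf(abs(z)))

print(f"pooled OR = {np.exp(pooled):.2f} "
      f"(95% CI {np.exp(pooled - 1.96 * pooled_se):.2f}-{np.exp(pooled + 1.96 * pooled_se):.2f}), "
      f"p = {p_value:.2g}")
```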
Despite the ubiquitous presence of IHC in research and diagnostic procedures, there are several limitations. Most notably, the lack of strict guidelines for staining often leads to conflicting results between different facilities using diverse procedures and various antibodies. In addition to consistency in antibody concentrations, many other components of IHC lack quality control. For example, whether an antibody binds to its target with appropriate sensitivity and specificity is not routinely tested. The lack of quality control procedures beyond the antibody itself may further contribute to the unreliability of staining. The sensitivity and specificity of the employed antibodies can be impacted by variations in tissue absorption time, slide thickness, and antigen retrieval. Therefore, to enable the widespread application of IHC-based biomarkers, thorough, standardised techniques are required. In addition, the optimisation of IHC is particularly important for newly discovered molecules or new antibodies; the specificity and sensitivity of the IHC assay need to be validated. IHC platforms can be utilised in a clinical setting for disease diagnosis, early disease prognosis prediction, and early therapy response prediction.
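One way to reduce the observer-dependent variability discussed above is to quantify staining computationally. The sketch below is a deliberately minimal illustration based on colour deconvolution in the scikit-image library; the file name, the crude background mask, and the Otsu threshold on the DAB channel are simplifying assumptions rather than a validated scoring pipeline.

```python
import numpy as np
from skimage import io
from skimage.color import rgb2hed
from skimage.filters import threshold_otsu

# Illustrative pipeline only: "ihc_core.png" is a hypothetical scanned tissue-microarray core.
rgb = io.imread("ihc_core.png")[..., :3]
hed = rgb2hed(rgb)                              # separate Haematoxylin, Eosin and DAB channels
dab = hed[..., 2]

tissue_mask = rgb.mean(axis=-1) < 240           # crude exclusion of white background
dab_threshold = threshold_otsu(dab[tissue_mask])
positive_fraction = (dab[tissue_mask] > dab_threshold).mean()

print(f"DAB-positive tissue fraction: {100 * positive_fraction:.1f}%")
```

In practice, such pipelines require nucleus segmentation, calibration against pathologist scores, and cross-laboratory validation before they can support clinical decisions.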
To increase the reliability and reproducibility of IHC, a verified automated image analysis platform should be deployed to reduce observer variability when quantifying biomarkers in patient samples. However, because they rely heavily on user input to speed up the machine-learning process, these platforms are still only partially automated. Traditional IHC has been used to identify and study the biomarkers discussed in this review, but new technologies enable more in-depth analyses of molecular markers. For instance, technologies such as Vectra or AQUA, which incorporate immunofluorescence, enable the simultaneous investigation of many cellular phenotypes. In our review, we found that Ki-67 antigen/MIB-1 antibody immunostaining can be employed as a diagnostic and predictive tool to direct the clinical care of ovarian cancer. Glypican-3, ALDH1A1, TNFR2, STAT3, FOXP3, and TIM3 are increasingly recognised as biomarkers for predicting chemoresistance in women with ovarian cancer. In addition, HOXA10, HNF-1β, and ARID1A are putative biomarkers with the potential to prognosticate the response to therapy. Chk2, PGD2, and NOTCH3 are also promising biomarkers for the prediction of chemoresistance in women with HGSOC. On the other hand, MAD2, IGF1R, PD-L1, PD-L2, and ERCC1 are still under investigation; clinical cohort studies with larger sample sizes and appropriate end-points are warranted to validate their potential. Evaluation of newer biomarkers is much needed to predict quality of life and overall survival in women with different subtypes of EOC. Finally, although a biomarker can act as a stand-alone indicator, a single biomarker is often not sufficient to stratify women unequivocally and safely. Given that numerous biomarkers have been discovered using these methods, the integration of IHC with genomic and transcriptional techniques may help in the more accurate and predictive identification of biomarkers. IHC-based methods can therefore be reliable on their own, but by combining them with other tests, or by making them more composite and quantitative in clinical research, we can hasten the development and confirmation of currently available biomarkers. A continually improving understanding of these molecular mechanisms has the potential to pave the way for more precisely targeted pharmaceutical treatments for cancer. Ovarian cancer is the most lethal gynaecological cancer; however, efforts are being made to improve clinical outcomes. IHC has become an essential tool for pathologists to elucidate the pathophysiology of the disease, both in routine practice and in research. IHC is also an important tool for validating biomarkers, which would eventually help in deciding the best treatment. In this review, we attempted to provide a thorough search for the most effective IHC biomarkers associated with chemotherapy response and disease relapse, as well as a brief overview of the principles and practical tips for employing various biomarkers as diagnostic and predictive tools in tissue histopathology for women with ovarian cancer. In addition, this review is intended to be a useful platform for future clinical studies implementing post-operative biomarkers as a prognostic tool and customising cancer treatments.
A randomized controlled educational study to evaluate an e-learning module to teach the physical examination of the temporomandibular joint in juvenile idiopathic arthritis
Arthritis of the temporomandibular joint (TMJ) occurs in up to 53% of patients with juvenile idiopathic arthritis (JIA), frequently leading to dentofacial deformity and dysfunction. The TMJ has been termed the "forgotten joint" of JIA and may present as the first or only joint involved. Furthermore, TMJ arthritis is often initially asymptomatic, leading to diagnostic delays. TMJ arthritis can have a severe negative impact on health-related quality of life that may persist into adulthood. In a cross-sectional study of adults with a history of TMJ arthritis and JIA, 62% experienced TMJ pain, 43% had functional limitation, and 76% had facial asymmetry. This occurs despite the introduction of biologics in the treatment of JIA. Early detection and treatment are believed to reduce joint damage and minimize resulting facial deformity. Gadolinium-enhanced magnetic resonance imaging (MRI) is currently the standard for diagnosing TMJ arthritis. The clinical orofacial examination is useful to detect signs of active TMJ arthritis, to assess response to therapy, and to detect abnormalities caused by prior TMJ arthritis. However, differences in how clinical orofacial exams are conducted have led to discrepancies among research studies and potential under-recognition of this clinical problem. Many specialists are involved in treating these patients, including rheumatologists, maxillofacial surgeons, orthodontists, and orofacial pain specialists, and orofacial examinations are performed inconsistently among them. While TMJ arthritis is a well-recognized problem in JIA, pediatric rheumatology training programs do not routinely teach a standardized TMJ examination technique. Trainees often lack exposure to teachers or multidisciplinary clinics with expertise in this disorder. Pediatric rheumatologists, however, often serve as the main care providers for susceptible patients and are best positioned to recognize early TMJ arthritis. Screening practices for TMJ arthritis vary among both adult and pediatric rheumatologists. A pediatric rheumatology fellowship curriculum to improve and standardize the TMJ examination technique could improve recognition and treatment of one of the most debilitating outcomes in JIA. The Temporomandibular Joint Juvenile Arthritis Work Group (TMJaw), a multi-disciplinary group of international experts, established consensus-based recommendations for the clinical orofacial examination in JIA. In 2019, this group published a standardized, short (less than 3 min) physical examination protocol to be used routinely in the clinical setting. However, this tool is not being used broadly for teaching JIA examination techniques to trainees, in part because the print description of the techniques is long and technical. In this study, we developed and evaluated an e-learning module based on the TMJaw standardized protocol with the goal of teaching the TMJ examination protocol to pediatric rheumatology trainees, so that it can be incorporated into their routine musculoskeletal physical examination for use in all patients with JIA. E-learning module A 25-minute e-learning module consisting of instructive video clips and interactive questions was developed utilizing the input of both TMJ content experts and a medical education e-learning team.
The module includes background information about TMJ arthritis (2 min) and uses a video narrator to review the elements of a structured physical exam: history (1 min), palpation (2 min), assessment of deviation (5 min), inspection for symmetry and profile changes (6 min), and measurements of range of motion (6 min). The module provides a mnemonic for the steps of the physical exam: STOMPS: S = symptoms (history); T = TMJ palpation; O = opening deviation; M = maximal unassisted mouth opening; P = profile; S = symmetry. Throughout the module there are 8 interactive components, spaced between video demonstrations. These include multiple choice questions, simulated measurements and calculations, and identification of anatomic landmarks. At the conclusion of the module, the entire TMJ exam is demonstrated, beginning to end, in less than 3 min. Consent was obtained to use demonstrative patient photographs. The module used standardized operational terminology and definitions for patients with JIA. The platform housing the module displayed the most recent logon but did not capture how long each participant spent viewing the module nor how many times it was reviewed. During the study, the module was embargoed; following the conclusion of this study, the module became freely available and can be accessed by creating an account. Pre- and post-tests A written assessment was developed to evaluate knowledge acquisition among fellows following the e-learning intervention. A convenience sample of pediatrics residents and oral and maxillofacial surgery (OMS) trainees participated in beta testing to assess baseline knowledge and ensure that our assessment tool discriminated between content-naïve pediatrics residents and mastery-level OMS trainees. Questions that did not discriminate well between pediatric resident knowledge (which was assumed to be minimal) and OMS residents (which was assumed to be adequate) were modified until they performed well as a tool for discriminating between groups of trainees. The final test consisted of 10 items, weighted by content, with a maximum possible score of 7; the pre-test was identical to the post-test. The power calculation was based on an alpha of 0.05 and a power of 0.80, with sigma and the expected difference estimated from pre-test validation performance in 27 pediatrics residents (mean 4.85, SD 1.1) and 9 OMS residents (mean 6.44, SD 1.24). This suggested a sample size of 9 per group would adequately power the study for a non-inferiority knowledge assessment. Study participants Pediatric rheumatology fellows were recruited by email to participate in a study to assess the TMJ module. Fellows were offered a $50 gift card for their participation. Twenty-two fellows were recruited with 11 in each group. Block randomization was performed, and fellows were stratified by post-graduate year, with an equal number of first year fellows and more senior fellows in each group. All participants completed a written pre-test before the learning intervention. Two weeks later, they were given access to the learning materials. The control group received the published article that describes the TMJaw physical examination protocol, while the e-learning module intervention group received both the article and access to the e-learning module. Fellows received their learning intervention two weeks prior to their in-person objective structured clinical examination (OSCE) and written post-test evaluation.
The learning intervention could be accessed multiple times at any point from its release two weeks before the OSCE until the day of the OSCE. OSCE An in-person OSCE was developed and took place in conjunction with the Childhood Arthritis and Rheumatology Research Alliance Scientific Meeting in March 2023. A prompt posted outside of the OSCE room explained the clinical scenario: a patient with well-controlled polyarticular JIA, treated with methotrexate, with a normal peripheral joint examination, mentions having jaw pain with chewing. Fellows were instructed to take a TMJ-focused history and to perform a thorough examination of the TMJ. No time limit was imposed. Evaluators were blinded to the participant group. A content expert (CR) evaluated all the test patients for accurate measurements before the OSCE. Participants were scored on an evaluation form that listed all the key TMJ examination components. The maximum possible OSCE score was 18. Evaluators met before and after the OSCE to review scoring and ensure consistency. Follow-up survey All participants completed a 5-item follow-up survey three months after the OSCE. The survey assessed frequency of TMJ exam performance in clinical practice, confidence in TMJ examination skills, and satisfaction with the learning experience. Study data were collected and managed using REDCap (Research Electronic Data Capture) electronic data capture tools hosted at Children's National Hospital. REDCap is a secure, web-based application designed to support data capture for research studies, providing (1) an intuitive interface for validated data entry; (2) audit trails for tracking data manipulation and export procedures; (3) automated export procedures for seamless data downloads to common statistical packages; and (4) procedures for importing data from external sources. Statistics Crosstab tables and Chi-squared tests were used to assess categorical variables across groups. For continuous measures, the Wilcoxon rank-sum test was used when comparing the two groups and the Wilcoxon signed-rank test was used when comparing paired data, owing to the small sample size. This study was deemed exempt by the Institutional Review Board of Children's National Hospital and consent was waived (Pro00014662).
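As a hedged illustration of the statistical planning and analyses described in this section, the R sketch below reproduces the sample-size estimate and the nonparametric comparisons. The two-sample t-test framework, the pooled SD, and the example data frame scores (with columns group, pre_score, and post_score) are assumptions made for illustration, not details reported by the study.

# Sample-size estimate from the pre-test validation data (two-sample t-test framework assumed;
# the pooled SD is derived from the two reported SDs and is itself an assumption)
m_peds <- 4.85; sd_peds <- 1.10; n_peds <- 27   # pediatrics residents
m_oms  <- 6.44; sd_oms  <- 1.24; n_oms  <- 9    # OMS residents
sd_pooled <- sqrt(((n_peds - 1) * sd_peds^2 + (n_oms - 1) * sd_oms^2) / (n_peds + n_oms - 2))
power.t.test(delta = m_oms - m_peds, sd = sd_pooled, sig.level = 0.05, power = 0.80)
# gives roughly 9 participants per group

# Hypothetical per-fellow data frame for the analyses described above
set.seed(42)
scores <- data.frame(
  group      = rep(c("article", "module"), each = 11),
  pre_score  = c(rnorm(11, 3.0, 1.1), rnorm(11, 2.7, 0.8)),
  post_score = c(rnorm(11, 5.2, 1.2), rnorm(11, 5.0, 1.0))
)

# Between-group comparison of post-test scores (Wilcoxon rank-sum)
wilcox.test(post_score ~ group, data = scores)

# Paired pre/post comparison within one group (Wilcoxon signed-rank)
with(subset(scores, group == "module"), wilcox.test(pre_score, post_score, paired = TRUE))

# Chi-squared test on a hypothetical 2 x 2 crosstab of a categorical outcome
chisq.test(matrix(c(14, 7, 8, 13), nrow = 2))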
Twenty-two pediatric rheumatology fellows were enrolled; 11 fellows were randomized to receive the previously published TMJaw article alone (article group), and 11 fellows were randomized to receive both the article and the e-learning module (module group). Each group had an equal number of fellows from each year of training. Both groups self-reported that they spent an equal amount of time preparing for the OSCE: a mean of 34.1 min in the article group and 35.5 min in the module group (p = 0.57). All fellows in the module group accessed the module. Pre- and post-tests Out of a maximum score of 7, the article group scored a mean of 3.00 (SD 1.09) on the pre-test and 5.23 (SD 1.19) on the post-test (p = 0.001). The module group scored a mean of 2.68 (SD 0.78) on the pre-test and 5.00 (SD 0.97) on the post-test (p = 0.001). There was no statistically significant difference between the two groups on the post-test score (p = 0.59) or in the difference between pre- and post-test scores (p = 0.96). The module participants were better able to identify the correct definition of maximal unassisted mouth opening (MUMO) on a multiple-choice question, although this was not statistically significant (p = 0.06). On the pre-test, two participants in the article group and none in the module group answered the question ascertaining MUMO correctly. On the post-test, six participants in the article group and ten participants in the module group answered this question correctly (p = 0.06). Both groups improved in recognition of patient profiles and facial asymmetry on photographs in the written evaluation. When asked if confident in their TMJ exam skills at baseline, 66% of the article group was not confident compared to 33% in the module group. This was not statistically significant. Overall, both groups reported an increase in confidence in their TMJ physical exam skills after the intervention (Fig. ). 73% in each group increased their confidence, with no statistical difference in the improvement rates between the two groups. However, this improvement in each group was significant (p < 0.001). OSCE Out of a total score of 18, the mean score on the OSCE was 11.1 (SD 3.3) in the article group and 13.5 (SD 1.9) in the module group, though this did not reach statistical significance (p = 0.06).
Significant differences were seen in learning domains related to measuring the maximal incisal opening (MIO), calculating the MUMO, and assessment of facial symmetry (Table ), with the module group having superior measurement scores in all three domains. There were no significant differences between the groups on their performance related to history taking, palpating the TMJ in the correct location, assessing for mandibular deviation, or evaluating facial profile. When asked to rank how enjoyable the learning experience was, the article group rated a mean of 5.9/10, while the module group rated it 7.7/10 (p = 0.017) (Fig. ). Follow up survey Three months after the OSCE, there was no difference in the reported rates of performing the full 3-minute TMJ examination protocol when evaluating new or follow-up JIA patients in practice. In the article group, 3 participants (27%) perform the protocol often or most of the time for new and follow-up JIA patients. In the module group, 5 participants (45.45%) perform the protocol often or most of the time for new JIA patients and 4 participants (36.36%) perform the protocol often or most of the time for follow-up JIA patients (Fig. ). One participant in the module group reported that they never performed the full TMJ exam protocol for follow-up patients. All participants in the module group were moderately or very satisfied with the learning intervention compared with only 8/11 (72.73%) in the article group (Fig. ).
Recently, there has been increased recognition of the importance of the clinical examination of the TMJ in children with JIA. Along with the ankle, wrist, hip, and sacroiliac joint, the TMJ is considered a key risk factor for poor outcomes in JIA, and thus the presence of TMJ arthritis is conditionally recommended to guide treatment decisions. Recommendations for the management of orofacial manifestations of JIA have highlighted the critical role of a standardized clinical TMJ examination. We therefore used the TMJaw brief standardized orofacial examination to develop a novel educational module to teach TMJ examination. In our study, both the original article and the module were shown to be effective in improving learners' knowledge of the TMJ examination in a sample of pediatric rheumatology fellows. The module was superior in teaching objective measurements of oral aperture. Accurate assessment of mouth opening has important clinical implications for children with JIA. Limited mouth opening and mandibular deviation at maximal mouth opening have been shown to be predictive of active TMJ inflammation in JIA. Assessment of mouth opening capacity is the most common outcome used to assess TMJ arthritis in clinical studies. There is a paucity of data regarding evidence-based pharmacologic treatment in patients with JIA and TMJ involvement. To our knowledge, there is no prospective study in the biologic era which addresses or provides specific guidance on TMJ arthritis management in JIA. TMJ outcomes are not routinely evaluated in clinical trials for JIA. Of 224 clinical trials involving JIA, only six studies reported on TMJ status. While standardized terminology to describe TMJ arthritis in JIA exists, it is still not widely used. This lack of consistent assessment and description can lead to discrepancies and difficulty comparing research studies, which may impede meaningful advancement in understanding of TMJ involvement in JIA. The question of optimal management of TMJ arthritis can only be addressed with use of standardized TMJ examination and inclusion of TMJ outcomes in prospective, controlled trials of patients with juvenile arthritis. We herein describe the first e-learning module that teaches JIA TMJ examination best practices.
The e-module can be completed within 25 minutes and uses interactive techniques and adult learning theory to teach the five key TMJ examination domains, using the STOMPS mnemonic as a guide. Fellows using the e-learning module rated their enjoyment of the module as 7.7/10, which significantly exceeded fellows' enjoyment of the article as a means of learning the TMJ physical examination. It is important to note that both groups improved significantly in their post-test knowledge levels and confidence after reviewing the materials, meaning that the article and the e-module were both effective learning tools. The e-module was superior for teaching fellows to perform measurement techniques, which is the most complex of the physical examination techniques included in the TMJ protocol. It is also notable that the fellows assigned the e-learning module enjoyed the learning experience more than those who used the article alone to prepare for the OSCE. In a real-world setting, where trainees would not be expected to perform an OSCE following an assigned curriculum, fellows may be more likely to complete an enjoyable e-learning self-study module than they would be to spend the same amount of time reading an article. Although the learning materials were made available to the trainees 2 weeks prior to the OSCE, most participants viewed the module on the day of the OSCE. Fellows using the 25-minute module, with access to the article, self-reported spending an average of 35.5 min preparing for the OSCE, which is comparable to the article group, which reported spending 34.1 min using the article to prepare. It is difficult to ascertain whether participants in the module group also accessed the article, or how long they spent in the module. Unfortunately, after 3 months, less than half of all participants reported consistently performing the full TMJ examination protocol in practice, and there was no difference between the groups. One participant in the module group reported never performing the TMJ exam for follow-up patients. While there was improvement in rates of performing this examination, barriers likely still exist but were not specifically assessed in this study. In a large multinational JIA registry, the TMJ was measured quantitatively in 8% of visits, compared with 27–45% of the learners who completed this study, so there is reason to believe that a structured curriculum in TMJ examination may lead to increased dedicated examination of these joints. In terms of study weaknesses, the p values of the two groups were just shy of statistical significance, and a larger sample size may have led to more significant results. Unfortunately, while the study was powered for the pre/post-test, it was not possible to fully power the OSCE. While we are able to confirm that each participant assigned to the module did access the module, we are unable to ascertain the duration that the participants spent on the study materials and which material was used as their primary source. We did not measure inter-rater variability for the OSCE; however, all evaluators received the same training on scoring procedures and were blinded to trainees' study groups. It is also important to note that an OSCE inherently forces observation in a simulated environment, which is an assessment of what one can do, rather than what one does in an authentic clinical setting.
Nevertheless, OSCE scores have been shown to correlate with clinical evaluations in pediatric residents, and the ability to examine the TMJ effectively will enable a trainee to gather more useful data when assessing for disease activity. We note that video monitoring could have provided more objective scoring of the trainees and allowed specific feedback to further improve trainees' skills and satisfaction with the training. Finally, sustained impact could have been assessed by performing a follow-up OSCE. However, this was not logistically feasible to coordinate. There were several strengths to this study. With expert collaboration, we developed a novel learning tool to teach a new standardized physical exam of the TMJ in JIA. The tool includes helpful video demonstrations, didactics, interactive questions, and a mnemonic to provide organization to the performance of the physical examination. To evaluate our tool, we conducted a randomized, blinded, in-person assessment of clinical skills among a target group of learners. All the subjects completed all the study activities, including the follow-up survey 3 months later. While the article was also helpful in teaching many of the examination skills, the module was more effective in teaching the quantitative measurements. This 25-minute interactive e-learning module improved pediatric rheumatology trainees' understanding of the TMJ examination (knowledge) and resulted in better ability to perform MIO and MUMO measurements (skills). The module's ability to effectively teach quantitative measurement is critical both for clinical care and for standardization in TMJ research studies. Trainees who participated in the interactive module reported more enjoyment of the learning process. Widespread use of this TMJ educational module will help improve clinical care by increasing early detection of TMJ involvement in patients with JIA, prompting appropriate treatment and ongoing monitoring. Future steps include further refinement of the content to focus on areas where learners had less improvement. The material will be included in an extended training event for pediatric rheumatology fellows and additionally will be offered to researchers who collect JIA measures. Supplementary Material 1.
The impact of smash-ridge tillage on agronomic traits of tobacco plants, soil enzymatic activity, microbial community structure, and functional diversity
Introduction In China, tobacco is a significant cash crop that provides substantial economic benefits, prompting farmers to engage in continuous tobacco cultivation on the same land to maximize income. Because tobacco cultivation is labor-intensive and technologically demanding, farmers often adhere to fixed plots in order to minimize costs and improve efficiency, and are reluctant to change locations or diversify crops. Continuous tobacco cropping can have adverse effects on soil quality, microbial communities, and the yield and quality of tobacco. Continuous cropping leads to soil acidification, resulting in increased phenolic acid content and decreased bacterial diversity, which can disrupt the structure of bacterial communities. Therefore, the application of modern, improved, and scientifically sound farming methods is particularly important. Smash-ridge tillage is a deep tillage technique that combines the advantages of deep loosening, deep tillage, rotary tillage, and vertical tillage; it can break the compacted plow pan and loosen the soil layer without disturbing the soil layer sequence or producing a new hard pan. Smash-ridge cultivation is a novel farming method that significantly enhances soil quality, water and nutrient retention, root development, and crop yield. This method increases soil porosity, organic matter, and available nutrients while reducing bulk density and pH. Furthermore, studies show that this cultivation method can improve soil structure, water retention, nutrient availability, root growth, forage yield, and quality. It has been applied to various crops across different provinces in China, and studies have shown that the smash-ridge tillage mode has stronger beneficial effects on soil quality, water storage capacity, crop yield, root vitality, soil microbial biomass, and microbial diversity in farmland. In recent years, the smash-ridge tillage technique has achieved great success in agricultural production. However, the effects of smash-ridge tillage on soil microbial communities and soil ecosystem functions in cultivated land are currently unknown. In this study, we employed high-throughput 16S rRNA sequencing and Biolog-ECO methods to systematically investigate the impact of smash-ridge tillage on soil microbial community structure and functional diversity. The objectives of this study were as follows: (1) to assess the variations in soil enzyme activity, microbial diversity, and community structure in cultivated land under different farming methods; (2) to analyze the functional diversity and groups of soil microbial communities under various tillage methods; and (3) to examine how smash-ridge tillage influences the relationship between soil microbial structure, enzyme activity, and microbial functional diversity. The anticipated outcomes of this research aim to provide a theoretical foundation and technical support for addressing the challenges associated with continuous tobacco cropping and enhancing tobacco production. Methods 2.1 Soil sampling The study site was located in Zhaohua County, Guangyuan City, Sichuan Province (32° 0' 54.92" N, 105° 26' 39.09" E). The area of flue-cured tobacco in Zhaohua County was 1946.6 hm², making it one of the important producing areas in northern Sichuan Province. The region has a humid subtropical climate, with uneven rainfall distribution resulting in earlier drought and later flooding periods. The average annual temperature, rainfall, and sunshine duration are 15.4°C, 1003.9 mm, and 1328.3 hours, respectively.
The soil type is calcareous purple soil, and the basic physical and chemical indexes of the sample soil are shown in . The tested variety was “Zhongchuan 208.” Three treatments were established, namely, traditional cultivation with a depth of 20 cm (CK), powdered ridge cultivation with a depth of 30 cm (ST1), and powdered ridge cultivation with a depth of 50 cm (ST2). In this experiment, a suspended powdered ridge machine was used to fully loosen and vertically grind the soil, followed by compaction of the powdered ridges. The control group used traditional cultivation with tractor plowing and harrowing. The land was prepared in March 2021, tobacco seedlings were transplanted on April 5, and the harvest took place on September 27. Soil samples were collected on September 28. The soil samples were taken using a five-point sampling method at a depth of 20 cm in the plowed layer, with five replicate samples for each soil. The soil samples were placed in sterile bags, sieved with a 2 mm sieve after removing stones, and stored at 4°C for soil enzyme activity and Biology-ECO analysis. Fresh soil samples of 50 g were sieved with a 1 mm sieve and stored at −80°C in an ultra-low temperature freezer for the determination of soil microbial community structure. 2.2 Measurement of agronomic traits of tobacco plants Measurement of plant growth indicators was conducted on selected 15 tobacco plants from each continuous cropping year plot after 110 days of plant growth. Plant height: The distance from the soil surface to the topmost point of the stem. Leaf area: A leaf was selected at the waist level, and its length and width were measured. The length was measured from the leaf base to the leaf tip, and the width was measured at the widest point. The leaf area was calculated using a conversion coefficient of 0.63. Leaf area = Length × Width × Conversion coefficient Stem circumference: The lowermost point of the aboveground section of the tobacco plant was established as the initial reference point. Using a flexible measuring tape positioned at a point located one-third of the plant’s overall height from the starting position, the stem circumference of the tobacco plant was measured. Effective leaf number: During the actual cultivation process, tobacco plants underwent leaf removal during the seedling stage and topping treatment during the field period. The lower foot leaves were also removed. The remaining number of leaves was considered the effective leaf number. Fresh root weight: After the tobacco roots were dug out from the soil and washed, the surface moisture was absorbed with absorbent paper or filter paper, and the fresh root weight was measured using an electronic balance. Fresh stem weight: After removing the underground part and leaves of the tobacco plant, the fresh weight of the stem per plant was measured using an electronic balance. 2.3 Determination of soil enzyme activity Urease activity: For each sample, 2–5 g of air-dried soil was weighed into a 50 mL centrifuge tube. Then, 1 mL of toluene was added to ensure the complete wetting of the soil sample. After 15 minutes, 10 mL of 10% urea solution and 20 mL of citric acid buffer solution (pH = 6.7) were added, followed by thorough shaking. The conical flask was placed in a constant temperature chamber at 37°C and incubated for 24 hours. After incubation, the solution was diluted to the mark with water at 38°C, thoroughly shaken, centrifuged, and filtered. 
A portion (1–3 mL) of the filtrate was drawn into a 50 mL colorimetric tube, diluted to 20 mL with distilled water, and vigorously shaken, followed by the addition of 4 mL of phenol-sodium solution and thorough mixing. Then, 3 mL of sodium hypochlorite was added, shaken well, and left to stand for 20 minutes. The solution was further diluted to the mark with water, resulting in an indophenol blue color (stable for up to 1 hour). Colorimetric measurement was performed at 578 nm. Urease activity = (m1 − m2 − m3) × V × n / m, where m1 is the milligrams of NH3-N derived from the sample absorbance value obtained from the standard curve; m2 is the milligrams of NH3-N obtained from the no-soil control absorbance value from the standard curve; m3 is the milligrams of NH3-N obtained from the no-substrate control absorbance value from the standard curve; V is the volume of the color solution; n is the ratio of the leaching liquid volume to the filtrate volume; and m is the weight of dried soil. Nitrite reductase activity: 1 g of fresh soil sample was placed in a 100 ml plastic bottle, and 20 mg of CaCO3 and 1 ml of KNO3 solution were added. After thorough mixing, 1 ml of glucose solution was added, while in another variant an equal amount of distilled water was added as a substitute for the substrate. The bottle was securely capped, lightly shaken, and placed in a constant temperature incubator at 30°C for 24 hours. Simultaneously, a reagent blank control was set up. After incubation, 50 ml of deionized water and 1 ml of aluminum potassium sulfate solution were added, followed by a 20-minute settling period. After thorough mixing, the solution was filtered. Subsequently, 20 ml of the filtrate was transferred to a porcelain evaporating dish (using a conical flask) and evaporated in a water bath. Then, 2 ml of phenol-sulfuric acid solution was added for dissolution, followed by a 10-minute treatment. Afterward, 15 ml of deionized water was added, and the solution was adjusted to a yellow color using 10% NaOH. Finally, the solution was transferred to a 50 ml volumetric flask, brought to volume, and colorimetrically measured at 400–500 nm. Nitrite reductase activity = (C1 − C2 − C3) × V × f / dwt, where C1 is the NO3-N content calculated from the standard curve for the no-soil blank absorbance value; C2 is the NO3-N content calculated from the standard curve for the sample absorbance value; C3 is the NO3-N content calculated from the standard curve for the absorbance value of the soil without substrate (potassium nitrate) (optional at this stage); V is the volume of the test solution (50 ml); f is the fractional factor (50 ml/20 ml); and dwt is the dry weight of the soil. Alkaline phosphatase activity: Take an appropriate amount of air-dried soil and place it into a 200 ml Erlenmeyer flask. Treat with 5 drops of toluene for 15 minutes, then add 20 ml of 0.5% sodium phenylphosphonate. Place the reaction mixture in a 37°C constant temperature incubator and incubate for 24 hours. After incubation, filter the mixture through a dense filter paper using 100 ml of 0.3% aluminum sulfate solution. No-soil control: Exclude soil samples and follow the same procedures as the sample experiments. This control is established to verify the purity of reagents throughout the experiment. No-substrate control: For each soil sample, replace the substrate with an equal volume of water, following the same procedures as the sample experiments.
Colorimetric procedure: Transfer 3 ml of the filtrate to a 50 ml volumetric flask and follow the coloration method described for constructing the standard curve. When using borate buffer solution, the color appears blue, and colorimetric measurement is performed at 510 nm using a spectrophotometer. Construction of the standard curve: Take 0, 1, 3, 5, 7, 9, 11, and 13 ml of the phenol working solution (b), place them in 50 ml volumetric flasks, and add 5 ml of buffer solution and 4 drops of chlorodibromobenzene quinone reagent. After coloration, dilute to the mark, and perform colorimetric measurement after 30 minutes. Construct a standard curve with optical density on the y-axis and concentration on the x-axis. Alkaline phosphatase activity = (a1 − a2 − a3) × V × n / m, where a1 is the milligrams of phenol derived from the sample absorbance value obtained from the standard curve; a2 is the milligrams of phenol obtained from the no-soil control absorbance value from the standard curve; a3 is the milligrams of phenol obtained from the no-substrate control absorbance value from the standard curve; V is the volume of the color solution; n is the ratio of the leaching liquid volume to the filtrate volume; and m is the weight of dried soil. Aryl sulfatase activity: Place 1.00 g of fresh soil sample (<2 mm) into a 50 mL Erlenmeyer flask, and add 0.2 mL of toluene, 4 mL of buffer solution, and 1 mL of potassium picrate solution. To eliminate the color influence of soil leachate, a control should be performed by adding 0.2 mL of toluene and 4 mL of buffer solution to 1.00 g of soil sample. Gently shake to mix, close with a stopper, and incubate at 37°C for 1 hour. After incubation, remove the lid and add 1 mL of CaCl2 solution and 4 mL of NaOH solution. For the control, add 1 mL of sodium picrate solution and shake briefly for a few seconds, then immediately filter through filter paper. Simultaneously, prepare the solutions for constructing the working curve. Dilute the prepared standard solution 100 times with water, then draw 1 mL, 2 mL, 3 mL, 4 mL, and 5 mL of the diluted standard solution into separate 50 mL Erlenmeyer flasks (containing 0 mg, 0.01 mg, 0.02 mg, 0.03 mg, 0.04 mg, and 0.05 mg of picrate). Adjust each to 5 mL with water, add 1 mL of CaCl2 and 4 mL of NaOH solution, shake briefly, and then filter through filter paper. Perform colorimetric analysis under the same conditions. Use a spectrophotometer to measure the absorbance of the solution at 400–420 nm. Aryl sulfatase activity w = (m1 / m2) × K, where w is the amount of picrate produced per unit of time; m1 is the mass of picrate in the test solution; m2 is the mass of the sample; and K is the moisture coefficient. β-glucosidase activity: Place 1 g of fresh soil sample (<2 mm) in a 100 ml Erlenmeyer flask, add toluene, allow ventilation in a fume hood for 10 minutes, then add 4 ml of pH-adjusted MUB solution and 1 ml of PNPG solution. Close the flask tightly, thoroughly mix the contents, and incubate at 37°C for 1 hour. Add 1 ml of CaCl2 solution and 4 ml of pH-adjusted Tris buffer, shake well, rapidly filter through fast filter paper, and perform colorimetric measurement of the filtrate at 400 nm. For the blank control, after incubation and before adding CaCl2 and Tris buffer, introduce 1 ml of buffer solution (substituting for 1 ml of substrate solution). Each sample requires a blank control and three replicates.
Standard curve: draw 0, 1, 2, 3, 4, 5, and 6 ml into a 100 ml volumetric flask, dilute to 5 ml with deionized water, then add 1 ml of calcium chloride solution and 4 ml of Tris solution. Thoroughly mix, adjust to 50 ml, filter, and perform colorimetric measurement at 400 nm. β-glucosidase activity = C × V / dwt, where C is the content of picric acid in the sample; V is the volume of the soil solution; and dwt is the dry soil mass. Determination of soil microbial functional diversity The microbial carbon source utilization characteristics in soil were determined using the Biolog method. A total of 5.0 g of soil sample was weighed and added to a triangular flask containing 45 mL of 0.85% sterile physiological saline. The flask was shaken at 150 rpm for 30 minutes at 25°C, followed by a 1-minute ice bath and 30 minutes of settling to obtain a microbial suspension of the soil sample. The soil suspension underwent a 1000-fold dilution. Employing a micropipette under aseptic conditions in a laminar flow cabinet, the prepared soil suspension was introduced into the individual wells of the Biolog microplate, each accommodating 150 μL of the suspension. After sealing, the plates were incubated at 25°C in darkness for 7 days. Periodic measurements were recorded at a wavelength of 590 nm at 24-hour intervals. The average well color development (AWCD) of microbial metabolic activity was calculated using the formula: AWCD = (1/n) Σ(Ci − R), where Ci is the absorbance value of each well at 590 nm, R is the absorbance value of the control well at 590 nm, and n represents the number of carbon sources in the EcoPlate. The ability of soil microbes to utilize each category of carbon sources was determined by calculating the AWCD values for each category of carbon sources in the EcoPlate, using the formula: AWCD = (1/n) Σ(Cm − R), where Cm is the absorbance value of each carbon source in the category at 590 nm, R is the absorbance value of the control well at 590 nm, and n represents the number of carbon sources in that particular category in the EcoPlate. 2.4 Methods for studying soil microbial community structure and functional composition DNA extraction and PCR amplification Genomic DNA of soil samples was extracted using the CTAB method, and the purity and concentration of DNA were detected by 1% agarose gel electrophoresis. A suitable amount of sample was taken in a centrifuge tube and diluted with sterile water to a concentration of 1 ng·µL-1. For bacterial sequencing, the V3 and V4 variable regions were amplified by PCR using primers 341F (5'-CCTAYGGGRBGCASCAG-3') and 806R (5'-GGACTACNNGGGTATCTAAT-3'). For fungal community sequencing, the ITS3 and ITS4 regions were amplified by PCR using primers 2024F (5'-GCATCGATGAAGAACGCAGC-3') and 2409R (5'-TCCTCCGCTTATTGATATGC-3'). All PCR products were analyzed by 2% agarose gel electrophoresis. Illumina NovaSeq sequencing PCR products with equal concentrations were thoroughly mixed. The mixed products were then purified using a 2% agarose gel with 1×TAE buffer. The target bands were excised and recovered using the Qiagen Gel Extraction Kit. The prepared libraries were constructed using the Illumina TruSeq® DNA PCR-Free Sample Preparation Kit (Illumina, USA).
The quantification and quality control of the libraries were performed using Qubit and library detection. After passing the quality control, the libraries were sequenced using the NovaSeq 6000 PE250 platform. Bioinformatics Analysis The analysis process mainly followed the “Atacama soil microbiome tutorial” in the QIIME2 documentation. The raw sequence fastq files were imported into a format suitable for subsequent processing in QIIME2 using the qiime tools import plugin. Subsequently, the QIIME2 dada2 plugin was utilized for quality control, trimming, denoising, merging, and removing chimeras, resulting in the final feature sequence table. The QIIME2 feature-classifier plugin was employed to align the representative sequences of the Amplicon Sequence Variants (ASVs) against the pre-trained 13_8 version of the GREENGENES database with 99% similarity (trimmed to the V3-V4 region based on the 341F/806 R primers), obtaining the taxonomic classification information of the species . Next, the QIIME2 feature-table plugin was used to remove all contaminating mitochondrial and chloroplast sequences. The LEfSe method was then applied to identify differentially abundant bacteria among groups and samples. Subsequently, the QIIME2 core-diversity plugin was employed to calculate the diversity matrix, including alpha diversity indices at the feature sequence level (Chao1, Ace, Shannon, and Simpson indices) to assess the biodiversity within samples. Beta diversity indices of samples were visualized using NMDS plots. The FAPROTAX software was used to predict the potential functional composition of the microbial community. 2.5 Data analysis Statistical significance tests were conducted using the R girmess package, multcompView package, and agricolae package. PCA analysis was performed using the R FactoMineR package. Random forest regression analysis was conducted using the R RandomForest package. Data visualization was carried out using the R ggplot2 package. Soil sampling The study site was located in Zhaohua County, Guangyuan City, Sichuan Province (32° 0’ 54.92” N, 105° 26’ 39.09” E). The area of flue-cured tobacco in Zhaohua County was 1946.6 HM2, making it one of the important producing areas in northern Sichuan Province. The region has a humid subtropical climate, with uneven rainfall distribution resulting in earlier drought and later flooding periods. The average annual temperature, rainfall, and sunshine length are 15.4°C, 1003.9 mm, and 1328.3 hours, respectively. The soil type is calcareous purple soil, and the basic physical and chemical indexes of the sample soil are shown in . The tested variety was “Zhongchuan 208.” Three treatments were established, namely, traditional cultivation with a depth of 20 cm (CK), powdered ridge cultivation with a depth of 30 cm (ST1), and powdered ridge cultivation with a depth of 50 cm (ST2). In this experiment, a suspended powdered ridge machine was used to fully loosen and vertically grind the soil, followed by compaction of the powdered ridges. The control group used traditional cultivation with tractor plowing and harrowing. The land was prepared in March 2021, tobacco seedlings were transplanted on April 5, and the harvest took place on September 27. Soil samples were collected on September 28. The soil samples were taken using a five-point sampling method at a depth of 20 cm in the plowed layer, with five replicate samples for each soil. 
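As a small illustration of how the AWCD values defined in the Biolog-ECO section can be computed from plate readings, the following R sketch uses a hypothetical vector of 31 substrate-well absorbances (od_substrate), a single control-well absorbance (od_control), and a hypothetical assignment of wells to carbon-source categories; none of these values come from the study itself.

# Hypothetical EcoPlate reading at 590 nm for one time point
set.seed(1)
od_substrate <- runif(31, min = 0.10, max = 1.20)   # 31 carbon-source wells
od_control   <- 0.15                                # water control well
category     <- sample(c("carbohydrates", "amino acids", "carboxylic acids",
                         "polymers", "amines", "phenolic acids"),
                       size = 31, replace = TRUE)

# Overall AWCD: mean of (Ci - R) over all carbon-source wells
awcd_total <- mean(od_substrate - od_control)

# AWCD within each carbon-source category, applying the same formula per category
awcd_by_category <- tapply(od_substrate - od_control, category, mean)

awcd_total
awcd_by_category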
The soil samples were placed in sterile bags, sieved with a 2 mm sieve after removing stones, and stored at 4°C for soil enzyme activity and Biology-ECO analysis. Fresh soil samples of 50 g were sieved with a 1 mm sieve and stored at −80°C in an ultra-low temperature freezer for the determination of soil microbial community structure. Measurement of agronomic traits of tobacco plants Measurement of plant growth indicators was conducted on selected 15 tobacco plants from each continuous cropping year plot after 110 days of plant growth. Plant height: The distance from the soil surface to the topmost point of the stem. Leaf area: A leaf was selected at the waist level, and its length and width were measured. The length was measured from the leaf base to the leaf tip, and the width was measured at the widest point. The leaf area was calculated using a conversion coefficient of 0.63. Leaf area = Length × Width × Conversion coefficient Stem circumference: The lowermost point of the aboveground section of the tobacco plant was established as the initial reference point. Using a flexible measuring tape positioned at a point located one-third of the plant’s overall height from the starting position, the stem circumference of the tobacco plant was measured. Effective leaf number: During the actual cultivation process, tobacco plants underwent leaf removal during the seedling stage and topping treatment during the field period. The lower foot leaves were also removed. The remaining number of leaves was considered the effective leaf number. Fresh root weight: After the tobacco roots were dug out from the soil and washed, the surface moisture was absorbed with absorbent paper or filter paper, and the fresh root weight was measured using an electronic balance. Fresh stem weight: After removing the underground part and leaves of the tobacco plant, the fresh weight of the stem per plant was measured using an electronic balance. Determination of soil enzyme activity Urease activity: For each sample, 2–5 g of air-dried soil was weighed into a 50 mL centrifuge tube. Then, 1 mL of toluene was added to ensure the complete wetting of the soil sample. After 15 minutes, 10 mL of 10% urea solution and 20 mL of citric acid buffer solution (pH = 6.7) were added, followed by thorough shaking. The conical flask was placed in a constant temperature chamber at 37°C and incubated for 24 hours. After incubation, the solution was diluted to the mark with water at 38°C, thoroughly shaken, centrifuged, and filtered. A portion (1–3 mL) of the filtrate was drawn into a 50 mL colorimetric tube, diluted to 20 mL with distilled water, vigorously shaken, followed by the addition of 4 mL of phenol-sodium solution and thorough mixing. Then, 3 mL of sodium hypochlorite was added, shaken well, and left to stand for 20 minutes. The solution was further diluted to the mark with water, resulting in an indophenol blue color (maintained stable for up to 1 hour). Colorimetric measurement was performed at 578 nm . U r e a s e a c t i v i t y = m 1 − m 2 − m 3 × V × n m m1: the milligrams of NH 3 -N derived from the sample absorbance value obtained from the standard curve; m2: the milligrams of NH3-N obtained from the no soil control absorbance value from the standard curve; m3: the milligrams of NH3-N obtained from the no substrate control absorbance value from the standard curve; V: the volume of the color solution; N: the ratio of the leaching liquid volume to the filtrate volume; M: the weight of dried soil. 
Nitrite reductase activity: 1 g of fresh soil was placed in a 100 mL plastic bottle, and 20 mg of CaCO3 and 1 mL of KNO3 solution were added. After thorough mixing, 1 mL of glucose solution was added, while in a parallel variant an equal amount of distilled water was added as a substitute for the substrate. The bottle was capped, shaken lightly, and incubated at 30°C for 24 hours; a reagent blank was set up at the same time. After incubation, 50 mL of deionized water and 1 mL of aluminum potassium sulfate solution were added, followed by a 20-minute settling period. After thorough mixing, the suspension was filtered. Then 20 mL of the filtrate was transferred to a porcelain evaporating dish (or a conical flask) and evaporated on a water bath, 2 mL of phenol–sulfuric acid solution was added for dissolution and left for 10 minutes, 15 mL of deionized water was added, and the solution was adjusted to a yellow colour with 10% NaOH. Finally, the solution was transferred to a 50 mL volumetric flask, brought to volume, and measured colorimetrically at 400–500 nm .
Nitrite reductase activity = (C1 − C2 − C3) × V × f / dwt
C1: NO3-N content, calculated from the standard curve, for the no-soil blank; C2: NO3-N content for the sample; C3: NO3-N content for the soil without substrate (potassium nitrate), optional at this stage; V: volume of the test solution (50 mL); f: conversion factor for the aliquot (50 mL/20 mL); dwt: dry weight of the soil.
Alkaline phosphatase activity: an appropriate amount of air-dried soil was placed in a 200 mL Erlenmeyer flask and treated with 5 drops of toluene for 15 minutes, after which 20 mL of 0.5% sodium phenylphosphonate was added. The reaction mixture was incubated at 37°C for 24 hours and then filtered through dense filter paper with 100 mL of 0.3% aluminum sulfate solution.
No-soil control: soil was omitted and the same procedure followed as for the samples; this control verifies the purity of the reagents used throughout the experiment.
No-substrate control: for each soil sample, the substrate was replaced with an equal volume of water and the same procedure followed as for the samples.
Colorimetric procedure: 3 mL of the filtrate was transferred to a 50 mL volumetric flask and coloured as described for the standard curve. With borate buffer solution the colour appears blue, and colorimetric measurement is performed at 510 nm on a spectrophotometer.
Standard curve: 0, 1, 3, 5, 7, 9, 11, and 13 mL of the phenol working solution (b) were placed in 50 mL volumetric flasks, 5 mL of buffer solution and 4 drops of chlorodibromobenzene quinone reagent were added, and after colour development the flasks were diluted to the mark and measured after 30 minutes. The standard curve was constructed with optical density on the y-axis and concentration on the x-axis .
Alkaline phosphatase activity = (a1 − a2 − a3) × V × n / m
a1: mg of phenol for the sample, read from the standard curve; a2: mg of phenol for the no-soil control; a3: mg of phenol for the no-substrate control; V: volume of the colour solution; n: ratio of the leaching liquid volume to the filtrate volume; m: weight of dried soil.
Aryl sulfatase activity: 1.00 g of fresh soil (<2 mm) was placed in a 50 mL Erlenmeyer flask, and 0.2 mL of toluene, 4 mL of buffer solution, and 1 mL of potassium picrate solution were added. To eliminate the colour contribution of the soil leachate, a control containing 1.00 g of soil, 0.2 mL of toluene, and 4 mL of buffer solution was run in parallel. The flasks were gently mixed, stoppered, and incubated at 37°C for 1 hour. After incubation the stoppers were removed, and 1 mL of CaCl2 solution and 4 mL of NaOH solution were added; to the control, 1 mL of sodium picrate solution was added. The flasks were shaken briefly and immediately filtered through filter paper. At the same time, the working-curve solutions were prepared: the standard solution was diluted 100-fold with water, and 1 mL, 2 mL, 3 mL, 4 mL, and 5 mL of the diluted standard were pipetted into separate 50 mL Erlenmeyer flasks (containing 0 mg, 0.01 mg, 0.02 mg, 0.03 mg, 0.04 mg, and 0.05 mg of picrate), each adjusted to 5 mL with water, followed by 1 mL of CaCl2 and 4 mL of NaOH solution, brief shaking, and filtration. Colorimetric analysis was performed under the same conditions, measuring absorbance at 400–420 nm on a spectrophotometer .
Aryl sulfatase activity (w) = m1 / m2 × K
w: the amount of picrate produced per unit of time; m1: the mass of picrate in the test solution; m2: the mass of the sample; K: the moisture coefficient.
β-glucosidase activity: 1 g of fresh soil (<2 mm) was placed in a 100 mL Erlenmeyer flask with toluene and left to vent in a fume hood for 10 minutes, after which 4 mL of pH-adjusted MUB solution and 1 mL of PNPG solution were added. The flask was closed tightly, mixed thoroughly, and incubated at 37°C for 1 hour. Then 1 mL of CaCl2 solution and 4 mL of pH-adjusted Tris buffer were added, the contents were shaken well and rapidly filtered through fast filter paper, and the filtrate was measured colorimetrically at 400 nm. For the blank control, 1 mL of buffer solution was introduced after incubation (substituting for 1 mL of substrate solution) before adding the CaCl2 and Tris buffer. Each sample required a blank control and three replicates. Standard curve: 0, 1, 2, 3, 4, 5, and 6 mL of the standard solution were drawn into 100 mL volumetric flasks and diluted to 5 mL with deionized water, then 1 mL of calcium chloride solution and 4 mL of Tris solution were added; the solutions were mixed thoroughly, adjusted to 50 mL, filtered, and measured colorimetrically at 400 nm .
β-glucosidase activity = C × V / dwt
C: the content of picric acid in the sample; V: the volume of the soil solution; dwt: the dry soil mass.
Determination of soil microbial functional diversity
The microbial carbon-source utilization characteristics of the soil were determined using the Biolog method. A 5.0 g soil sample was weighed and added to a triangular flask containing 45 mL of 0.85% sterile physiological saline.
The flask was shaken at 150 rpm for 30 minutes at 25°C, followed by a 1-minute ice bath and 30 minutes of settling to obtain a microbial suspension of the soil sample. The suspension was then diluted 1000-fold. Under aseptic conditions in a laminar flow cabinet, 150 μL of the diluted suspension was pipetted into each well of the Biolog microplate. The sealed plates were incubated in the dark at 25°C for 7 days, and absorbance at 590 nm was recorded every 24 hours. The average well colour development (AWCD) of microbial metabolic activity was calculated as:
AWCD = (1/n) × Σ (Ci − R)
where Ci is the absorbance value of each well at 590 nm, R is the absorbance value of the control well at 590 nm, and n is the number of carbon sources in the EcoPlate. The ability of soil microbes to utilize each category of carbon sources was determined by calculating the AWCD for that category:
AWCD = (1/n) × Σ (Cm − R)
where Cm is the absorbance value of each carbon source in the category at 590 nm, R is the absorbance value of the control well at 590 nm, and n is the number of carbon sources in that particular category of the EcoPlate.
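As an illustration of how the AWCD formulas above translate into computation, the short R sketch below computes overall and per-category AWCD from an EcoPlate reading; the objects od590 (rows = reading times, columns = carbon-source wells), control (the water control well for the same rows) and category_cols are assumed placeholders, not objects from this study.

# Sketch only: AWCD = (1/n) * sum(Ci - R), computed row-wise over reading times.
awcd_overall <- function(od590, control) {
  rowMeans(sweep(od590, 1, control, "-"))
}

# AWCD restricted to one carbon-source category (e.g. carboxylic acids);
# category_cols gives the column indices of the wells in that category.
awcd_category <- function(od590, control, category_cols) {
  rowMeans(sweep(od590[, category_cols, drop = FALSE], 1, control, "-"))
}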
Methods for studying soil microbial community structure and functional composition
DNA extraction and PCR amplification
Genomic DNA of the soil samples was extracted using the CTAB method, and the purity and concentration of the DNA were checked by 1% agarose gel electrophoresis. A suitable amount of sample was taken in a centrifuge tube and diluted with sterile water to a concentration of 1 ng·µL⁻¹. For bacterial sequencing, the V3–V4 variable regions were amplified by PCR using primers 341F (5'-CCTAYGGGRBGCASCAG-3') and 806R (5'-GGACTACNNGGGTATCTAAT-3'). For fungal community sequencing, the ITS2 region was amplified by PCR using the ITS3/ITS4 primer pair 2024F (5'-GCATCGATGAAGAACGCAGC-3') and 2409R (5'-TCCTCCGCTTATTGATATGC-3'). All PCR products were checked by 2% agarose gel electrophoresis.
Illumina NovaSeq sequencing
PCR products were mixed at equal concentrations and purified on a 2% agarose gel in 1×TAE buffer. The target bands were excised and recovered using the Qiagen Gel Extraction Kit. Libraries were constructed using the Illumina TruSeq® DNA PCR-Free Sample Preparation Kit (Illumina, USA). The quantification and quality control of the libraries were performed using Qubit and library detection, and the libraries that passed quality control were sequenced on the NovaSeq 6000 PE250 platform.
Bioinformatics Analysis
The analysis process mainly followed the "Atacama soil microbiome tutorial" in the QIIME2 documentation. The raw sequence fastq files were imported into a format suitable for processing in QIIME2 using the qiime tools import plugin. The QIIME2 dada2 plugin was then used for quality control, trimming, denoising, merging, and chimera removal, yielding the final feature sequence table. The QIIME2 feature-classifier plugin was used to align the representative sequences of the Amplicon Sequence Variants (ASVs) against the pre-trained Greengenes database (version 13_8, 99% similarity, trimmed to the V3–V4 region based on the 341F/806R primers) to obtain the taxonomic classification of the species . Next, the QIIME2 feature-table plugin was used to remove all contaminating mitochondrial and chloroplast sequences. The LEfSe method was then applied to identify differentially abundant bacteria among groups and samples. The QIIME2 core-diversity plugin was used to calculate the diversity matrix, including alpha-diversity indices at the feature-sequence level (Chao1, ACE, Shannon, and Simpson) to assess biodiversity within samples. Beta diversity among samples was visualized using NMDS plots. The FAPROTAX software was used to predict the potential functional composition of the microbial community.
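The steps above were carried out in QIIME2; purely as an R-based illustration (not part of the original pipeline), the same alpha-diversity indices and the Bray-Curtis NMDS could be sketched with the vegan package, assuming asv_table is a samples-by-ASV count matrix exported from the feature table.

# Illustrative sketch with vegan; asv_table is an assumed placeholder object.
library(vegan)

shannon  <- diversity(asv_table, index = "shannon")
simpson  <- diversity(asv_table, index = "simpson")
richness <- estimateR(asv_table)   # observed richness plus Chao1 and ACE estimates

# Beta diversity: Bray-Curtis NMDS in two dimensions.
ord <- metaMDS(asv_table, distance = "bray", k = 2, trymax = 100)
ord$stress                         # stress of the two-dimensional solution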
Data analysis
Statistical significance tests were conducted using the R girmess, multcompView, and agricolae packages. PCA was performed using the R FactoMineR package. Random forest regression analysis was conducted using the R randomForest package. Data visualization was carried out using the R ggplot2 package.
Results
3.1 Effects of smash-ridge tillage on agronomic traits and soil enzyme activity of tobacco plants
(1) Performance of agronomic traits of tobacco plants with different smash-ridge tillage treatment
(2) Performance of soil enzyme activity of tobacco plants with different smash-ridge tillage treatment
depicts the performance of six agronomic traits throughout the growth process of tobacco plants under the various cultivation conditions. The treatment with a smash-ridge tillage depth of 30 cm (ST30) exhibited a significantly greater leaf area (1080.87 cm²) than the control group (CK), whereas the other five indicators showed no significant differences from the control. Conversely, the treatment with a smash-ridge tillage depth of 50 cm (ST50) demonstrated significantly higher average plant height (119.33 cm), average stem circumference (10.25 cm), average fresh root weight (93.67 g), fresh stem weight (93.67 g), and leaf area (1276.61 cm²) than the control group (CK) (p < 0.05). presents the impact of the different cultivation methods on soil enzyme activity.
The results indicate that soil samples from smash-ridge tillage cultivation exhibited significantly higher activities of urease, nitrite reductase, alkaline phosphatase, and aryl sulfatase than soil samples from traditional plow-tilling cultivation (CK). Moreover, when comparing the two smash-ridge tillage depths, the soil tilled to 50 cm displayed significantly greater activities of these four enzymes than the soil tilled to 30 cm. Only β-glucosidase activity was higher in the CK samples than in the samples from smash-ridge tillage at a depth of 30 cm (p < 0.05).
3.2 Impact of smash-ridge tillage on soil microbial community composition and function in tobacco plantation
(1) Performance of soil microbial community composition with different smash-ridge tillage treatment
In this study, the bacterial community composition of all soil samples was examined using 16S rDNA sequencing, resulting in the identification of 151 operational taxonomic units (OTUs) at the phylum level. As shown in , the dominant bacterial phyla in soil samples from the different cultivation treatments were relatively consistent, including Proteobacteria, Acidobacteria, Bacteroidetes, Actinobacteria, Gemmatimonadetes, Verrucomicrobia, Chloroflexi, Nitrospirae, Firmicutes and Cyanobacteria, with cumulative relative abundances exceeding 95%. Moreover, the relative abundance of Proteobacteria was significantly higher in the soil samples from smash-ridge tillage cultivation (30 cm and 50 cm) than under the traditional cultivation method, while the relative abundance of Acidobacteria showed a significant decrease. illustrates the distribution of unique bacterial OTUs in soil samples from the different cultivation methods. As depicted in the figure, the CK sample and the 30 cm smash-ridge tillage soil sample each had one unique bacterial OTU, while the 50 cm smash-ridge tillage soil sample had six unique OTUs. Based on the annotation of OTUs, species richness and diversity indices of the bacterial communities in the different soil samples were further analyzed. As shown in , the species richness indices (ACE and Chao1) of soil samples from both smash-ridge tillage depths (30 cm and 50 cm) were significantly higher than those of the control soil samples, although there was no significant difference in species richness between the two smash-ridge tillage depths. Similarly, the Shannon and Simpson indices of the bacterial community structures in smash-ridge tillage soil samples at both depths were significantly higher than those in the control samples, with no significant difference between the 30 cm and 50 cm depths. β-diversity is often used to describe compositional differences in species among habitats. In this study, differences in soil bacterial community structure under the different cultivation methods were analyzed using NMDS ( ), with a stress value of 0.1043 based on Bray-Curtis dissimilarity. The figure clearly illustrates that the soil samples from the different cultivation treatments were categorized into three major groups based on their microbial community characteristics.
(2) Performance of carbon source utilization characteristics of bacterial communities with different smash-ridge tillage treatment
An analysis of the carbon-source utilization of the soil microbial communities across the different soil layers identified six principal components, of which the first two accounted for 85.1% and 7.1% of the variance, respectively ( ). Given that the cumulative variance contribution of the first two principal components exceeds 90%, subsequent analysis focused on these two components. presents the contribution of the six major carbon-source categories to principal components 1 and 2. All carbon sources are situated in the first and fourth quadrants, implying a positive influence on principal component 1. The contributions to the principal components rank, in descending order, as follows: C2 > C5 > C3 > C1 > C6 > C4 (C1: carbohydrates; C2: carboxylic acids; C3: amines; C4: amino acids; C5: phenolic acids; C6: polymers).
(3) Performance of microbial functions prediction with different smash-ridge tillage treatment
Significant differences in carbon-source utilization were observed among soil samples subjected to the various tillage practices. illustrates the relative positions of the different tillage soil samples within the two-dimensional map defined by the first two principal components. The CK soil samples are primarily distributed in the second and third quadrants, close to the distribution of the 30 cm smash-ridge tillage samples, and their 95% confidence ellipses overlap, suggesting similar carbon-source utilization in the two sample groups. In contrast, the 50 cm smash-ridge tillage samples are predominantly located in the first and fourth quadrants, with no intersection with the 95% confidence ellipses of the preceding two treatments, implying that their carbon-source utilization differs from those groups. The FAPROTAX database uses published scientific literature on relevant bacterial strains to annotate and map OTUs (operational taxonomic units) to established metabolic or other ecologically relevant functions, and is particularly suitable for functional annotation and prediction of biogeochemical cycling processes in environmental samples. In this study, a total of 688 OTUs were identified at the species level and used to predict the main functions of the soil microorganisms with the FAPROTAX database. displays the top 20 functional pathways ranked by relative abundance. Among these top twenty functional pathways, twelve are closely associated with nitrogen utilization and the biogeochemical cycling of nitrogen. Furthermore, a comparison of functional pathways across the tillage samples indicates that the expression levels of 15 pathways in the 50 cm smash-ridge tillage soil sample are significantly higher than in the control sample. This study employed random forest regression analysis to identify the top 10 OTUs at the genus level that exert the most significant influence on tobacco plant leaf area ( ). Among these, the three most influential OTUs were Granulicella, Ilumatobacter, and Modestobacter, with importance values of 2.37, 2.20, and 2.12, respectively.
Following closely were the OTUs ranked 4th to 7th: Caldilinea, Bacillus, Nitrobacteria, Kaistobacter and Sphingomonas, each exhibiting importance values exceeding 2.
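To show how the analyses named in the Data analysis subsection fit together, the fragment below is an illustrative R sketch of the treatment comparison, the Biolog PCA and the random-forest importance ranking; trait_df, biolog_profiles and genus_table are placeholder objects rather than the study's actual data, and the calls shown are standard usage of the named packages.

# Illustrative sketch only; data objects are assumed placeholders.
library(agricolae)
library(FactoMineR)
library(randomForest)

# Treatment comparison for an agronomic trait such as leaf area,
# with multiple-comparison letter groupings.
fit <- aov(leaf_area ~ treatment, data = trait_df)
print(HSD.test(fit, "treatment")$groups)

# PCA of the Biolog carbon-source utilisation profiles.
pca <- PCA(biolog_profiles, graph = FALSE)
head(pca$eig)   # variance explained per principal component

# Random-forest regression of leaf area on genus-level abundances,
# ranked by permutation importance (%IncMSE).
rf  <- randomForest(x = genus_table, y = trait_df$leaf_area, importance = TRUE)
imp <- importance(rf)
head(imp[order(imp[, "%IncMSE"], decreasing = TRUE), ], 10)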
Discussion
Numerous studies have demonstrated that the composition and activity of soil microbial communities are influenced by agricultural tillage practices . Relevant studies have found that smash-ridge tillage provides favorable conditions for the survival of deep soil microorganisms, leading to a significant increase in the total bacterial count in the 0–40 cm soil layer compared with conventional tillage. Smash-ridge tillage also effectively enhances the abundance of soil microorganisms and increases the quantities of ammonia-oxidizing bacteria, nitrogen-fixing bacteria, inorganic phosphate bacteria, and potassium bacteria in the soil compared with rotary tillage and deep plowing treatments . The results of this study indicate that smash-ridge tillage significantly improves both the abundance and the biodiversity of bacterial communities compared with traditional rotary tillage, which is consistent with previous findings. The reason may be that smash-ridge tillage effectively reduces soil compaction and promotes the formation of soil macroaggregates . Dynamic changes in soil aggregate structure in turn feed back on microbial activity and the differentiation of soil organic matter composition, establishing a coupling among soil organic matter composition, soil aggregate structure, and microbial community dynamics that ultimately alters bacterial community structure and diversity. In this study, after smash-ridge tillage of the tobacco-planting soil at different depths, both bacterial richness (ACE and Chao1 indices) and diversity (Shannon and Simpson indices) were significantly enhanced compared with the control samples, indicating that smash-ridge tillage substantially influences the structural characteristics of soil bacterial communities. These alterations in bacterial community structure ultimately have implications for the functionality of the entire soil microbial ecosystem , which is consistent with the fundamental outcomes of this research. The diversity of soil bacterial and fungal communities directly influences microbial functional diversity ( ). Existing research has shown a close correlation between soil microbial community structure and carbon-source metabolism .
The interactions in the soil between copiotrophs (r-strategists) and oligotrophs (K-strategists), which carry functional genes related to metabolism, genetic information processing, and organismal systems, promote the absorption of nutrients and the dissolution of iron and small molecules from the soil, accelerating their own growth and enhancing bacterial metabolic activity, thereby increasing bacterial functional community diversity . This study also found that tillage practices affect soil enzyme activity ( ). The transformation of nitrogen and phosphorus and the metabolism and transport of carbohydrates in the soil are closely related to urease, invertase, and alkaline phosphatase . Smash-ridge tillage improves the crop–soil environment through physical means, invigorating crop roots and promoting their downward growth . This leads to the release of more enzymes that enhance the exchange of nutrients with the soil microenvironment , resulting in higher soil enzyme activity under smash-ridge tillage than under traditional tillage. Enzymes in the soil originate not only from crop roots but also from the metabolic activities of soil microorganisms . Heterotrophic bacteria release enzymes with corresponding functions to participate in soil nutrient cycling for their own development . Compared with smash-ridge tillage, traditional tillage hinders the development of crop surface roots and soil microbial activity , resulting in low soil enzyme activity. Smash-ridge tillage improves the physical structure of the cultivated soil, promotes the growth of crop roots and the development of soil microorganisms, and leads to the release of corresponding enzymes, accelerating soil nutrient cycling; as a result, soil enzyme activity under smash-ridge tillage is higher than under traditional tillage. Other studies have also shown that soil enzyme activity not only reflects the degree of organic matter hydrolysis but also indicates the ability to utilize carbon sources . This study also found that soil enzyme activity significantly influences microbial functional diversity, indicating that smash-ridge tillage can indirectly enhance microbial functional diversity by increasing enzyme activity. This strengthens the nutrient utilization ability of soil microbial communities, improves community functional structure, enhances soil ecosystem stability, and improves the performance of tobacco plants in field agronomic traits. In this study, a total of 677 genus-level OTUs were annotated through database comparison. Using random forest regression analysis, the top 10 microbial OTUs with the highest correlation to the total leaf area of individual tobacco plants (number of leaves × leaf area) were selected. Granulicella (a genus of granule-forming bacteria) and Ilumatobacter (a genus of acidophilic bacteria) both belong to the Acidobacteria phylum and are typically associated with carbon-source utilization in the soil . Previous research suggests that Granulicella can serve as an indicator organism for evaluating specific soil types and the succession patterns of soil microbial communities in typical black-soil areas under different land-use practices . The latter genus is sensitive to nitrogen content in the soil environment and plays a role in enhancing the stability of soil bacterial community structure, promoting bacterial synergistic interactions, and increasing resistance to disturbance.
Furthermore, related studies have found significant negative correlations between the relative abundance of these two genera and the distribution and quantity of some soil-borne pathogenic microorganisms in the soil environment . This indicates their significant potential for application in inhibiting soil-borne pathogens. The genus Modestobacter belongs to the Actinobacteria phylum. Studies have found that this genus is closely associated with the cycling of phosphorus (P) in soil owing to the presence of functional genes such as alkaline phosphatase genes (phoA/phoD) and phosphorus transport-related genes (phoU, phnC, phnD, phnE, phoB, phoH, phoP, phoR, pitH, ppk, pstA, pstB, pstC, and pstS). These genes contribute to the efficient utilization of P in the soil and help plants adapt to adverse soil environments . The genus Bacillus comprises aerobic or facultatively anaerobic bacteria, many of which are of significant agricultural importance. Most species within this genus are saprophytic bacteria capable of producing diverse beneficial metabolites, and they therefore play a crucial role in promoting crop growth and facilitating carbon sequestration . This genus is also a key participant in the formation of biofilms in soil; it assists plants in resisting environmental stresses and adapting to extreme conditions, promotes soil aggregation, and enhances the stability of soil structure. Nitrobacteria are involved in the biogeochemical cycle of nitrogen, improving the efficiency of plant nitrogen utilization , and research has shown a close relationship between their abundance in soil and the disease resistance of tobacco plants . Kaistobacter has been considered a signature microorganism for soil improvement in some studies, and its abundance increases significantly in intercropping systems. Sphingomonas, a common beneficial soil microorganism, can produce plant growth hormones, promote root growth, increase plant biomass, and enhance plant stress tolerance . For example, Sphingomonas is significantly enriched in disease-resistant rice and can interfere with the biosynthesis of virulence factors in rice bacterial pathogens by secreting small-molecule signals such as salicylic acid, thereby improving rice disease resistance . It can also act as an antagonist of plant pathogenic fungi and secretes valuable exopolysaccharides . Research has also shown that continuous cropping reduces the abundance of Sphingomonas in soil . The above results indicate that smash-ridge tillage not only enhances the microbial community diversity of the cultivated soil but also increases the abundance of functional microorganisms involved in soil nutrient transformation, thereby playing an important role in maintaining the stability of the soil micro-ecological environment. Previous studies have shown that root growth releases enzymes and carbon sources , thereby influencing the microbial decomposition of carbon sources and promoting an increase in soil microbial diversity . The differences in microbial community structure between smash-ridge tillage and traditional farming may therefore arise because smash-ridge tillage promotes root growth, leading to the release of more carbon sources by the roots and consequently enhancing microbial functional diversity.
Research has demonstrated that diseased roots increase the release of carboxylic acid and amine carbon sources, thereby enhancing the utilization of these carbon sources by soil microorganisms . This suggests that smash-ridge tillage can reduce the probability of crop root disease: where roots are less diseased, fewer carboxylic acid and amine carbon sources are secreted into the surrounding soil, which alters the capacity of the microbial community to utilize these carbon sources. Among the top twenty functional pathways identified in this study, twelve are closely related to nitrogen utilization and the biogeochemical cycling of nitrogen. Additionally, a comparison of the functional pathways across the tillage treatments showed that the expression levels of 15 functional pathways were significantly higher in the 50 cm smash-ridge tillage soil samples than in the control samples. This indicates that smash-ridge tillage can effectively activate the ecological functions of bacterial communities, enhance the functional activity of soil microorganisms, and help plants absorb and transport nutrients from the soil environment more effectively.
Conclusions
This study investigated the effects of different tillage practices on tobacco agronomic traits, soil enzyme activity, microbial community structure, and functional metabolism in tobacco-growing soils. The results demonstrate that smash-ridge tillage, compared with conventional tillage, significantly improves the bacterial community structure, increases the functional groups involved in nutrient metabolism, and enhances soil microbial diversity; these improvements ultimately manifest in the agronomic status of the tobacco plants. Smash-ridge tillage enhances the activities of urease, nitrite reductase, alkaline phosphatase, and aryl sulfatase, enzymes involved in nutrient cycling, and promotes the capacity of soil microorganisms for nutrient metabolism, thereby enhancing the overall functional diversity of the soil microbial community. The diversity of soil microbial communities, functional diversity, enzyme activity, and agronomic traits of the tobacco plants all indicate that smash-ridge tillage at a depth of 50 cm outperforms treatment at a depth of 30 cm. In summary, from the perspective of soil microbial ecology, smash-ridge tillage technology promotes nutrient cycling, improves soil quality, and enhances the agronomic performance of tobacco plants. This study provides a scientific basis for the application and development of smash-ridge tillage technology in the southwestern tobacco-growing regions of China. The impact of tillage practices on soil quality is a long-term process, and with long-term use of smash-ridge tillage, further changes in soil physical structure and soil microbial ecology can be expected. This study only investigated the effects of smash-ridge tillage at different depths on soil enzyme activity, microbial community structure, and functional diversity over a single year. In the future, long-term monitoring and tracking studies will be conducted to further investigate changes in soil microbial ecology, as well as to explore soil aggregates, soil carbon content, and soil diseases.
Content validity testing of the INTERMED Self‐Assessment in a sample of adults with rheumatoid arthritis and rheumatology healthcare providers
e31d71c2-9406-47e7-b5ad-69f90ec17342
10873686
Internal Medicine[mh]
INTRODUCTION Rheumatoid arthritis (RA) is the most common autoimmune arthritis affecting approximately 1% of adults worldwide. The condition is characterised by joint inflammation leading to pain and progressive joint damage, if left untreated. Even when disease is ‘well controlled’ or in remission, individuals with RA may experience ongoing pain, fatigue and poor functional status. Furthermore, RA is associated with extra‐articular manifestations such as interstitial lung disease and comorbidities such as cardiovascular diseases and osteoporosis. There is also a high prevalence of mental health concerns such as depression and anxiety. Individuals with RA can be impacted by other social determinants of health such as employment status, food and housing insecurity and social considerations such as exclusion and discrimination, among others. These can contribute to lower quality of life, resulting in additional complexity in addressing healthcare needs. Defining the health status of patients has evolved over time. Previous focus was on number and types of comorbidities and number of medications. , Over time, the concept of care complexity has emerged, which considers both medical and nonmedical aspects of health. It describes the interplay between one or more chronic health conditions and possible mental health concerns, along with social determinants of health such as housing, food insecurity, lack of transportation, structural racism and other factors that negatively influence health or hinder access to healthcare. The INTERMED is a commonly used interview‐based instrument that measures care complexity across four biopsychosocial domains: biological, psychological, social and health systems across the past, present and future. The INTERMED was developed as a standardised way to measure complexity. It serves as a communimetric instrument to measure complexity, and also highlights domains to direct care in an interdisciplinary setting. The INTERMED can only be administered by a healthcare provider (HCP) who has completed specific training. The INTERMED was later adapted into a patient‐reported instrument called the INTERMED Self‐Assessment (IMSA). The INTERMED has been used in a wide variety of settings , , ; however, it can be challenging to administer. The interview itself is approximately 20 min in length, which may be unfeasible due to resource and time constraints. , More recently, the IMSA was developed to overcome these limitations and has undergone feasibility, face validity and reliability testing. Face validity was ascertained by asking patients the following questions: ‘Do you think that these questions were appropriate to ask?’ (yes/no) and ‘Did we miss issues pertinent to your care?’ (yes/no). No cognitive debriefing was conducted. Given the recency of its development, few study teams have tested how well the IMSA measures complexity. , Furthermore, to our knowledge, no studies exist on the psychometric properties of the IMSA in individuals living with RA. Interdisciplinary care is considered the optimal treatment model for RA patients with higher care complexity. Interdisciplinary care teams may include family physicians, physiotherapists, occupational therapists, social workers, nurses and nurse practitioners, pharmacists and mental healthcare professionals, among others, , working together to promote holistic and patient‐centred care. 
However, interdisciplinary care delivery is costly and potentially difficult for certain patients to access as the rheumatology workforce is primarily located in urban centres. Rheumatology workforce deficits are expected globally in the coming decades, which will exacerbate accessibility of care over time. Allocating limited resources efficiently while supporting patient needs can be challenging. Unfortunately, there is no standardised method in rheumatology to measure which patients have greater care complexity and could benefit from interdisciplinary resources. , The objective of the present study was to test the content validity of the IMSA through cognitive debriefing interviews with a sample of patients with RA and rheumatology HCPs. The results of this study contribute to the body of knowledge around care complexity in RA. This work was planned as an initial step before local implementation of the IMSA. Measuring complexity can facilitate HCP teams in efficiently directing interdisciplinary care and improve patient ‘fit’ with the resources that they need at the right time. METHODS 2.1 Study design We conducted a series of cognitive debriefing interviews with patients diagnosed with and receiving care for RA and with rheumatology HCPs. Cognitive debriefing is a method to evaluate content validity, which includes the relevance, comprehensiveness and comprehensibility of the item stems and answer sets of a psychometric instrument. , Typically, cognitive debriefing is recommended to adapt an existing instrument to a specific population and ought to be conducted before further psychometric testing and/or item reduction. The aim of the cognitive debriefing interviews was to (1) assess whether the English‐language version of the IMSA ( https://www.intermedconsortium.com/instrument/ ) was clear and understood as intended by patients with RA and by rheumatology HCPs; (2) determine any revisions required to enhance the relevance, understandability and comprehension of the item stems and answer sets; and (3) gather insight into specific topics relevant to care complexity in RA and that are not addressed in the IMSA. This project was approved by the Conjoint Health Research Ethics Board at the University of Calgary (REB20‐0422). All participants provided their consent to be involved in this project. 2.2 Study setting This study was conducted in Alberta, Canada, and involved four rheumatology clinics across the province, including the Kaye Rheumatology Clinic in Edmonton and the Northeast Rheumatology Clinic and Associates Clinic in Calgary. The study primarily centred at the Richmond Road Diagnostic and Treatment Center Rheumatology Clinic in Calgary. It is the largest rheumatology clinic in Southern Alberta providing care to thousands of patients with rheumatic diseases yearly. 2.3 Eligibility and recruitment 2.3.1 Patient participants Participants were recruited through posters and brochures displayed within and distributed by HCPs from participating rheumatology clinics in Alberta. Postings about the study were available on trusted national websites, including Arthritis Society Canada and Arthritis Research Canada. Patients were also identified and contacted by email through two studies (Rheum4U and A Better Match) conducted at the University of Calgary. Rheum4U is a quality improvement and research platform in the Division of Rheumatology at the University of Calgary and A Better Match was a study of text‐based messaging to support patient care in RA. 
Patients in these two studies provided consent to be contacted about future research opportunities. From those who consented to be contacted, we invited adult participants (18 years of age and older) who had a diagnosis of RA and were not concurrently participating in another study. Patients who learned about the study by posters or brochures were instructed to contact the study coordinator using the contact information provided. All patient participants were sent an email containing information about the study and a link to an online consent form, with a short demographic survey and the IMSA following this. Fifty emails were sent on 25 October 2022 to Rheum4U participants. Reminders were sent 1 and 3 weeks after the initial email. Twenty‐one participants from A Better Match were contacted between 11 July and 15 July 2022. Once consented, an email requested the participants to indicate their preferred day, time and interview method (telephone or Zoom videoconferencing) and email reminders were sent 1 week later if a participant did not reply. 2.3.2 Rheumatology HCPs Rheumatology HCPs included any physician or healthcare professional involved in rheumatology care including rheumatologists, residents, nurses, pharmacists and physiotherapists. Rheumatology HCPs were initially recruited across Alberta through email announcements sent by the University of Calgary rheumatology divisional manager and announcements at divisional rounds. To broaden recruitment, a message was included in the Canadian Rheumatology Association email newsletter to reach providers nationwide. Due to the low recruitment of HCPs, we included some members of the study team. These team members were not involved in interview guide development, nor did they review the IMSA before the interview. These members were to be involved in future implementation of this instrument in our ambulatory care settings and their feedback was valuable as part of the planning process. Furthermore, no members of the research team had any involvement in the development of the IMSA. Rheumatology HCPs, if interested in participating, were instructed to contact the study coordinator and received a link to an online consent form and a short demographic survey followed. If the consent form and survey were not completed, an email reminder was sent 1 week later. Once consented, an email requested participants to indicate their preferred day, time and interview method (telephone or Zoom videoconferencing) and email reminders were sent 1 week later if a participant did not reply. 2.4 Procedure After consenting to the study, patient–participants responded to demographic questions (year of birth, year of RA diagnosis, gender identity, highest level of education attained and race/ethnicity) and completed the IMSA. For HCPs, brief demographic information was collected (HCP type, years of practice, sex at birth and gender identity). A single semi‐structured interview guide was developed by two members of the research team (C. E. H. B. and K. D.) for both patients and HCPs (Supporting Information S1: Supplement ). Questions in the interview guide focused on comprehension of the item and answer set, confidence/recall for providing a response, relevance of the item, questions related to specific items in the IMSA and concluding questions that asked participants about the comprehensiveness of the IMSA and what care complexity challenges exist in RA that may be missing in the instrument. Two researchers trained in qualitative methods (K. D. and M. H.) 
conducted the interviews over Zoom through videoconferencing or by telephone using the cognitive debriefing semi-structured interview guide. The interviewer administered each of the IMSA items one at a time verbally and encouraged participants to 'think aloud' when interpreting the items and providing responses. Participants were asked probing questions about their comprehension of survey items, how they arrived at answer choices (recall), appropriateness of the answer choices provided (response) and their perceived relevance of the item to their health care. When participants encountered difficulties with the items, the interviewers asked for suggestions on how to modify the item and the answer set. Interviews were audio-recorded and transcribed verbatim by a transcription service. Transcripts were checked for accuracy by the study coordinator and identifying details were removed. By way of appreciation, participants received a $50 gift card. 2.5 Analysis Transcript analysis was guided by Willis' and Wills' protocols for analysing cognitive interview results. Analysis focused on a reparative framework, which involved 'finding and fixing' participant-identified issues with the instrument. Two coders (K. D. and M. H.) independently analysed the transcripts: K. D. by IMSA item and M. H. by participant. Results were compiled across interviews and organised by IMSA item. Coders then independently created a text summary of the results for each item, noting findings for comprehension, recall, response and relevance. The coders' results were compared, and differences in text summaries and recommendations were discussed until consensus was reached in a smaller working group (K. D., M. H., C. E. H. B.). Recruitment continued until saturation occurred, which was reached after only a few interviews with both patient participants and HCPs. Saturation was defined by the recurrence and repetition of themes by both groups of participants. Results were presented to the broader research team and patient partners. Due to the volume and depth of repairs recommended by participants, consensus was reached to halt further interviews. This approach parallels Willis, where if major problems are discovered with an instrument, there is little to no benefit to continue until repairs are made.
Study setting

This study was conducted in Alberta, Canada, and involved four rheumatology clinics across the province, including the Kaye Rheumatology Clinic in Edmonton and the Northeast Rheumatology Clinic and Associates Clinic in Calgary. The study primarily centred at the Richmond Road Diagnostic and Treatment Center Rheumatology Clinic in Calgary. It is the largest rheumatology clinic in Southern Alberta, providing care to thousands of patients with rheumatic diseases yearly.
RESULTS

Seventeen people with RA completed the consent form for the study. Nine people participated in the interview and eight people did not respond to the interview day/time/method request email.
One participant partially completed the items, providing demographics (gender identity and year of RA diagnosis) and the IMSA only, and eight participants fully completed the items (Table ). The median (Q1, Q3) age was 59.5 years (56, 62) and the median number of years lived with RA was 5 (3, 6). Eight of the nine participants identified as female and all participants identified as White/European. Most participants either had a high school diploma or equivalent or a college/nonuniversity certificate or diploma. The IMSA cut‐off score for detecting complexity is 19; the median score for the patients was 15 (12, 22). Three patients were classified as higher complexity using this cut‐off. Our sample is similar to other Canadian RA cohorts in terms of age, years lived with RA, education and ethnicity/race.

Six rheumatology HCPs completed the consent form for the study. Five persons participated in the interview and one person did not respond to the interview day/time/method request email. Of these, three were rheumatologists, one was a rheumatology resident and one was a rheumatology clinic pharmacist. Three out of the five participants were female and had between 2 and 10 years of practice as an HCP. Two of the five HCP participants were study team members.

Participant quotes were organised into reparative themes based on their nature and recurrence across the IMSA items: (1) lack of clarity and standardisation, with subthemes around phrasing of items and answer sets, language regarding mental health and social roles, and inconsistency across answer sets; (2) item barrelling, where items contained multiple clauses simultaneously; and (3) timeframes, such as recall period or prediction of future events. The IMSA items were then organised by these themes to develop an overview of the possible modifications to the IMSA based on the reparative framework. Most items fit under a single theme, and some items had several issues relevant to a single theme or multiple themes (Table ). Examples of how the IMSA items aligned with the three themes are presented below with participant quotes, with further examples shown in Table . Following these are remarks regarding the comprehensiveness of the IMSA. A table of IMSA items and answer sets, along with participant‐recommended repairs to them, is presented in Supporting Information S1: Supplement. Figure provides an overview of the IMSA item topics in relation to their domains and across their time segments.

3.1 Lack of clarity and standardisation

3.1.1 Phrasing of items and answer sets

Clarity and wording concerns made up a significant portion of the results. Participants perceived that items and answer sets were sometimes poorly worded, and the intention of the items was unclear. Due to the lack of clarity, it was difficult for some patients to select an answer to an item that best fit their situation. For example, Item 4A is, 'Do you think your doctors understand the origin of your current physical problem/s?'. It was unclear to patients and HCPs what the term 'origin' implied.

Oh, this one confused me a little bit to tell you the truth … speaking to like, with my physical problem … I mean … with RA … I'm not sure anybody knows what causes it … so … it doesn't take them long to, you know … diagnose the problem, and I've been treated ever since. But did they actually understand the origin … of what caused it? Is that what you're asking?
(Patient Participant #1, male, IMSA score 15, age unknown)

I don't know what origin means. So does origin mean diagnosis, or does origin mean why I have this diagnosis? So in other words … my doctor is clear that I have rheumatoid arthritis, there's no doubt, or my doctor doesn't know why I have rheumatoid arthritis?

(HCP Participant #1, male, rheumatologist, >10–20 years practicing)

Another example involved Item 11: 'Is your home living situation satisfactory? Or are adjustments needed, such as home modifications, receiving home care, or going to live somewhere else?'. Some patients were unclear as to how satisfaction with their home living situation should be interpreted:

… So when I was reading this question … it made me think … how is … your emotional living at home … but then I … finishing the question, I realized that it was about like physical monitor‐ modifications that can like help you move around the house. Yeah, but the first part of the question to me sounded more like … do you have like a safe, comfortable home? Rather than … is it accessible for you?

(Patient Participant #18, female, IMSA score 26, age 15–24)

Um, I don't know. I think home modifications would be readily interpretable. Um, I was just thinking about like … I don't know. Yeah, I'm trying to think about, even like, like the question structure.

(HCP Participant #2, female, pharmacist, >2–10 years practicing)

3.1.2 Language regarding mental health and social roles

Other findings regarding item wording related to language describing mental health and social roles. Several IMSA items and/or answer sets refer to 'psychological problems', where participants commented on changing the phrase to 'mental health' to limit possible stigmatisation, as mentioned below:

… I don't think people will respond well to 'psychological problems'… I think there's a lot of … awareness … mental health is more out in the public sphere now. And so I think … mental health concerns as opposed to psychological problems, but I think also … we all have mental health.

(HCP Participant #2, female, pharmacist, >2–10 years practicing)

Item 9B contains an answer choice that reads, 'I am a housewife taking care for the household and others', referring to a caregiving role that is gendered. Participants suggested rephrasing this to be neutral and inclusive of people performing a caregiving role, regardless of sex or gender, as one of our participants articulated their ideas about a possible revision to the answer choice:

So I like 'do you work outside of the home', or 'do you have paid employment', kind of language. So, um, not wanting to degrade work that happens inside the home. Um, obviously, the housewife, um‐ Yeah. So, language for that … 'I am providing care for, you know, children and/or others in the, in the home' or whatever. Yeah, something like that.

(HCP Participant #2, female, pharmacist, >2–10 years practicing)

3.1.3 Inconsistency across answer sets

The issue of inconsistency across answer sets appeared often. Throughout the IMSA, answer choices would follow a particular pattern and then include one or more answer choices deviating from the pattern. For example, Item 5 is 'In the past 5 years, how did you cope with stressful, difficult situations?' and answer choices B and C (Supporting Information S1: Supplement) refer to difficulty coping in these situations, leading to tensions with partners, family or others. However, tension with others was not mentioned in all answer choices.
Answer choice D refers to always experiencing difficulties with stressful and difficult situations, making a person upset and tense, which deviates from the pattern where the other answer choices describe tensions with others. This was acknowledged by two HCPs, who mentioned that the answer sets do not follow a similar structure across items in the IMSA overall, and that Item 5's answer set in particular followed a structure that was different from what they were expecting, as outlined by the quotes below:

… it's a little bit weird how the [answer choices] don't follow the same kind of syntax or structure. Like, um, the middle two talk about the result of them … and then two [answer choice B] says, 'Often led to tensions and problems'. And then in three [answer choice C] … I would be expecting it to say, 'I always experience difficulties with stressful, difficult situations that lead to tensions and problems with my partner, family or other people'. Or something like that.

(HCP Participant #4, male, rheumatologist, >2–10 years practicing)

So, like, 'sometimes I have difficulties with coping with stressful difficult situations', okay, fair. But why does that, by definition, at‐ as this answer seems to suggest, that means that it results in tension, problems with partner, family or other people? So if you are someone who … feels sometimes I have difficulty coping with stressful, difficult situations, but does not sometimes result in tension, problems with my partner, family or other people. Now what‐ how would you answer the question now?

(HCP Participant #1, male, rheumatologist, >10–20 years practicing)

This recurred in Item 10, which is, 'How do you generally relate to other people?', in which answer choice B refers to experiencing tensions with others and answer choice D refers to contacts and friendships deteriorating into quarrels and conflicts (Supporting Information S1: Supplement). Tensions, conflicts and quarrels were not mentioned in any of the other response options. Confusion around the rationale for these types of responses and the inconsistencies within the answer set were noted by two patient participants:

Uh, some of the answers … I thought were a little strange …. I was wondering kind of why it kept going to the tense or the quarrels and conflicts …. Like I wasn't quite sure about that, I suppose…I guess I didn't really know why it was going to the, um, to the sort of the angrier side of things.

(Patient Participant #21, female, IMSA score 14, age range 35–44)

I answered this one, the first [answer choice] … I've got lots of friends and I socialize well. As it got further in, I thought, 'Oh, isn't this interesting? Where are they going with this?'. But I didn't understand what's the point of that …. I was surprised as … it was … more negative stuff.

(Patient Participant #10, female, IMSA score 18, age range 55–64)

3.1.4 Item barrelling

Item barrelling refers to multiple clauses being included in an item while only allowing for one answer choice. Some items were double, triple or even quadruple barrelled, which made them challenging to answer. Barrelling also made it difficult for researchers to know which parts of the item a patient's response applied to. A prominent example was Item 13, which asks, 'Do you experience problems in getting the care you need due to living too far away, not having any insurance, or not speaking the language very well, or differences in culture?'.
Participants, while thinking aloud, would answer each part of the item individually or provided ways to present this question differently, as demonstrated by the quotes below:

… They [the items in the question stem], they're not problems for me … because I live in the city. We have healthcare. English is our first language. Um, culture, I've been here 32 years, I guess, so (laughs) I think culture is, um, is normal to me now.

(Patient Participant #20, female, IMSA score 7, age range 55–64)

Too much … of a question. Yeah … You can separate these out into one, two, three, four questions and have … the various answers for each one of them or you could have … a tick box like select all that apply … type of thing … if this is an electronic survey like display logic for, for digging into … how often something happens.

(HCP Participant #2, female, pharmacist, >2–10 years practicing)

3.1.5 Timeframes

Some items asked patients to recall health information from years or months ago. This caused challenges for patients answering the item, and HCPs commented on the relevance of the timeframes for people with RA. For example, Item 2 is 'How difficult has it been in the past 5 years to diagnose the physical problems you experienced?', with participants stating the following:

… I was going back even farther [than 5 years] and then … I realized, 'Well, you know what? This is five years … I'm just gonna go with the way it went down when I was actually sent along to [Rheumatologist Name]. So how difficult has it been in the past five years? [Restated the question]. Well, it's actually been six, so I just kind of went with that … I thought, Well, we need to go back to when this all started … that's a long time. But for me it was longer even (laughs).

(Patient Participant #5, female, IMSA score 12, age 65+)

Um, y‐ like, you could shorten the interval, but I think, like, a lot of our patients do … have had issues for five years. I'd probably say two years would be sort of more acute and relevant, but … if you're looking at the patients who are more complex and grumbled before they got to us, then five is probably more appropriate. But you could shorten it to two.

(HCP Participant #3, female, rheumatologist, >2–10 years practicing)

Other items asked patients to predict future physical and mental health challenges and use of supports in the next 6 months, which both patients and HCPs found challenging. Predicting future health needs was further complicated by the nature of RA, which patients describe as unpredictable. Item 17 is 'In the next 6 months, do you expect your physical health to change?', with patients responding with the following:

I have no clue, and I feel like my doctors have no clue either. So I, I wasn't sure how to answer it. I, I felt like that was like, I don't know, a hard thing. And I'm also like going to be changing medications and like doing new things. So I don't know, those might work. They might not.

(Patient Participant #18, female, IMSA score 26, age 15–24)

… It sometimes takes us six months to diagnose someone. So not everyone would feel anything different in six months. But at the same time, it's hard to ask a patient, 'How do you think about next year, or next two years?' So I think from that perspective, it's fair to ask six months, but not necessarily. I think, that's going to change the whole picture of the patient management.
(HCP Participant #5, female, rheumatology resident, practicing 2 years or less)

Some items had several issues, as exemplified by Item 20, which is 'In the next 6 months, do you expect that you will be in need of more help and support?'. In addition to the challenge of predicting future needs, what patients define as 'help and support' and what patient expectations might be were perceived as vague:

… it was kind of more like, help and support with what? Because … I feel … in certain areas of my life, I will need more … help and support. But … when it comes to my … physical health, again, I just don't know. So I think … emotionally … when it comes to like therapy and counseling and stuff like that, I think I will be in need of more of that. But again, with my physical health, I just don't know what I'll need.

(Patient Participant #18, female, IMSA score 26, age 15–24)

What they're [patients] expecting, based on what they presented to us at this point? Or what they're expecting based on what the diagnosis is? Or what their hopes are? So it's lots of things that you can think about, which it's hard to predict. I would take away the whole question.

(HCP Participant #5, female, rheumatology resident, practicing 2 years or less)

3.1.6 Comprehensiveness of the IMSA

Finally, when asked about elements of complexity that may be missing from the IMSA, most participants commented that the instrument was comprehensive in addressing potential elements relating to their care. One patient participant mentioned how their plans for retirement shifted significantly due to their diagnosis and suggested an element regarding future planning or life transitions. From an HCP perspective, one individual suggested adding a question or questions regarding prior and/or current social supports, so that physicians or other HCPs could plan for and/or provide resources for the patient, depending on their situation.

Revisions were deemed necessary to all but one item in the IMSA. Due to the number and scope of repairs required to amend the IMSA for use in RA, it was unfeasible to revise it using a reparative framework, as the resulting instrument would no longer retain its measurement scheme.
DISCUSSION

To our knowledge, our study is the first to assess content validity by performing cognitive debriefing of the IMSA with RA patients and rheumatology HCPs. Overall, the IMSA was perceived to be comprehensive in terms of complexity domains. However, participants pointed out several examples demonstrating that the instrument did not meet key elements necessary for content validity for use in RA, primarily due to issues around item relevance, comprehension and the recall periods used for the items. This impression was shared by both patients and HCPs in our sample, with similar themes identified by both groups of participants.

The INTERMED and IMSA have been used in various populations such as primary care, chronic pain, orthopaedics and mental healthcare settings. The IMSA has been translated into several languages, including French, German, Dutch, Spanish and Japanese, and a version for older people, INTERMED for the Elderly (IM‐E), has also been developed. One example of the IMSA being used to allocate resources is a study performed in Switzerland that compared the IMSA to general practitioner (GP) opinion in predicting which patients would benefit from case manager intervention in primary care due to care complexity. In this study, 331 patients completed the English version of the IMSA; researchers then asked GPs for their opinion on that same patient being assigned a case manager, with four possible answers that varied on yes or no and on the usefulness of being assigned a case manager. Researchers found that three items from the IMSA were sufficient to predict GP opinions about patients benefiting from case management (Q1 A and B [presence of physical problems and presence of a long‐lasting chronic condition], Q3 [physical limitations in performing daily activities] and Q9 A, B and C [employment status and engagement in activities/hobbies/volunteering]).
Given the simplicity and brevity of a three‐item IMSA, researchers commented that this could serve as an ideal prescreening instrument, as the information could be readily available in electronic medical records, possibly eliminating questionnaire burden on patients. These three items only focus on biological and social aspects and may exclude other salient considerations in determining care complexity. The study also centred on GP perceptions of care complexity, which may differ from care complexity for rheumatologic conditions, as the study authors note that complexity is context‐dependent. These differences could affect the use and relevance of instruments that attempt to measure complexity.

In rheumatology, the use of care complexity measurement instruments has been scant. One of the few studies in this area is Koch et al., where researchers used the INTERMED to identify patients who experience care complexity and correlated this with measures of healthcare utilisation and disease activity (including the Health Assessment Questionnaire, Short Form Survey 36‐item (SF‐36), Rheumatoid Arthritis Disease Activity Index, radiologic erosion scores and disease activity score‐28). Patients who scored high on the INTERMED (indicating care complexity) scored particularly high in the psychological domain of the INTERMED and were more likely to receive disability compensation. These patients also had increased healthcare utilisation in terms of emergency room visits, hospitalisations and specialist visits, even though they did not differ from 'noncomplex' patients in terms of disease activity. Patients who had care complexity also reported a worse global assessment of disease compared to the physician global assessment of disease and scored worse on the SF‐36 domains of general health and vitality. While the INTERMED was effective at detecting care complexity, it is an interview‐based method that allows items to be clarified and explained if ambiguous, which may impact patient responses and influence their overall score.

While the INTERMED and IMSA measure important domains, they do not address RA‐specific domains. The authors of Koch et al. found that several patient‐reported surveys were required to supplement the INTERMED to determine the areas in which care complexity existed, which may be unfeasible for both patients and HCPs due to time constraints in clinical settings. An ongoing literature review being conducted by our study team is investigating care complexity factors in RA and their impact on processes of care and/or RA outcomes. While some of these factors are captured in the IMSA, including mental health, coping and housing instability, other complexity factors more pertinent to RA, such as functional status and disability, concordance with the treatment plan and social support, were asked about in a way that made it difficult for patients to understand and relate to their own experiences. These factors may uniquely drive complexity in RA, and incorporating these domains into a tailored instrument may measure complexity more effectively in this population.

In addition to the lack of complexity measures and their specificity, there is a paucity of data regarding RA and care complexity generally. The available evidence suggests that RA is highly impacted by biopsychosocial factors.
For example, race and intersectional inequalities can increase the risk of developing RA, and these factors can sustain poor disease control through biological and nonbiological factors such as inequitable access to care and underrepresentation in research, where disease outcomes in diverse populations are not well understood. Furthermore, social determinants of health can potentially reinforce one another. Income inequality plays a causal role in poor health outcomes for various reasons and can contribute to disparities such as food insecurity. Working‐age adults who are food‐insecure have an elevated risk of developing chronic health conditions such as arthritis. People with RA who experience food insecurity have higher odds of depression, and the odds increase when food insecurity increases in severity. People with arthritis who are food‐insecure are also at risk of medication underuse. This is particularly concerning as pharmaceutical treatment of RA is critical to controlling disease activity, and medication costs have risen dramatically due to the availability of biologics and targeted synthetic disease‐modifying antirheumatic drugs. Unfortunately, people who have overlapping and complex health and social needs are often met with uncoordinated and inefficient care, cycling through multiple providers and/or systems with little benefit from these interactions.

The current standard of care in RA involves routine investigations that primarily measure biological aspects, including disease activity, functional status, comorbidities and medication use. Mental health concerns in people with RA are common, but screening for these conditions during clinic visits is inconsistent. Beyond these measures, other social determinants of health and/or elements of complexity that can significantly impact RA outcomes are rarely considered. Measuring these needs would improve our knowledge of care complexity in RA and allow for efficient use of resources to provide high‐quality and accessible care that improves health outcomes. A framework for understanding care complexity in RA and its impact on care processes and patient outcomes will be presented in future work.

There were some limitations to this study. Most HCPs interviewed were rheumatologists, with limited representation of other healthcare professions involved in rheumatological care. Two study team members were interviewed; however, their perceptions of the IMSA did not differ significantly from those of the patient participants or the rest of the HCP participants. There was also little diversity among the patients with RA who were interviewed in terms of age and ethnicity, as saturation was reached early. Furthermore, patients who actively seek care, participate in research studies and/or review rheumatology‐related websites were recruited, so our sample may be less representative of people with higher care complexity. The online English‐language version of the IMSA was used in this study, and it is unclear whether other studies or research teams have revised the instrument to address the issues that our study found.

In conclusion, the IMSA did not have content validity within our sample of adults with RA and rheumatology HCPs. Due to the significant number of repairs and revisions suggested by participants, the changes required would have altered the instrument substantially from its validated form, necessitating a change to its scoring and interpretation.
The creation of a new complexity instrument for RA, and/or the exploration of other instruments that measure care complexity and are relevant and appropriate for use in RA, is needed to better allocate resources to patients and improve health outcomes. Future instrument development in this area should involve cognitive debriefing alongside repairs in an iterative fashion. Doing this early in instrument development can help ensure inclusive language, clarity and consistency of items and answer‐choice anchors, and an appropriate instrument length.

AUTHOR CONTRIBUTIONS

Kiran Dhiman: Investigation; methodology; project administration; writing—original draft; writing—review and editing; formal analysis. Marc Hall: Investigation; writing—review and editing; formal analysis. Trafford Crump: Conceptualisation; funding acquisition; writing—review and editing. Diane Lacaille: Conceptualisation; funding acquisition; writing—review and editing. Glen Hazlewood: Conceptualisation; funding acquisition; writing—review and editing. Cheryl Barnabe: Conceptualisation; funding acquisition; writing—review and editing. Steven Katz: Conceptualisation; funding acquisition; writing—review and editing. Jason Sutherland: Conceptualisation; funding acquisition; writing—review and editing. Erika Dempsey: Conceptualisation; funding acquisition; writing—review and editing.

CONFLICT OF INTEREST STATEMENT

The authors declare no conflicts of interest.